Posted to dev@parquet.apache.org by Sumit Khanna <su...@askme.in> on 2016/08/06 04:36:40 UTC

parquet : Can not read value at 0 in block -1

Hello,

I have a hunch that I am mirroring data into HDFS (Parquet format) with a
schema whose datatypes are not supported by, or conflict with, Parquet.

Has anyone experienced issues like this with Parquet and Spark before?

This is the error trace from firing a simple select * :


   - Bad status for request TFetchResultsReq(fetchType=0,
   operationHandle=TOperationHandle(hasResultSet=True, modifiedRowCount=None,
   operationType=0,
   operationId=THandleIdentifier(secret='b=\x92\x0f\xc86E\xcc\xae\xf6*\xba|L;\xf3',
   guid='\xe0\x96\xac\x1aqz@G\x99Q\x16\x86\x90;0 ')), orientation=4,
   maxRows=100): TFetchResultsResp(status=TStatus(errorCode=0,
   errorMessage='java.io.IOException: parquet.io.ParquetDecodingException: Can
   not read value at 0 in block -1 in file
   hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder/partitioned_on_modeofpayment=Pay_Later/part-r-00000-63373292-8473-47dc-9b9c-ec29724afe7b.gz.parquet',
   sqlState=None,
   infoMessages=['*org.apache.hive.service.cli.HiveSQLException:java.io.IOException:
   parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in
   file
   hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder/partitioned_on_modeofpayment=Pay_Later/part-r-00000-63373292-8473-47dc-9b9c-ec29724afe7b.gz.parquet:25:24',
   'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:352',
   'org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:220',
   'org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:685',
   'sun.reflect.GeneratedMethodAccessor63:invoke::-1',
   'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43',
   'java.lang.reflect.Method:invoke:Method.java:498',
   'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78',
   'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36',
   'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63',
   'java.security.AccessController:doPrivileged:AccessController.java:-2',
   'javax.security.auth.Subject:doAs:Subject.java:422',
   'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1657',
   'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59',
   'com.sun.proxy.$Proxy22:fetchResults::-1',
   'org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:454',
   'org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:672',
   'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1553',
   'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1538',
   'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39',
   'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39',
   'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
   'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
   'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142',
   'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617',
   'java.lang.Thread:run:Thread.java:745',
   '*java.io.IOException:parquet.io.ParquetDecodingException: Can not read
   value at 0 in block -1 in file
   hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder/partitioned_on_modeofpayment=Pay_Later/part-r-00000-63373292-8473-47dc-9b9c-ec29724afe7b.gz.parquet:29:4',
   'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:507',
   'org.apache.hadoop.hive.ql.exec.FetchOperator:pushRow:FetchOperator.java:414',
   'org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:140',
   'org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:1670',
   'org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:347',
   '*parquet.io.ParquetDecodingException:Can not read value at 0 in block -1
   in file
   hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder/partitioned_on_modeofpayment=Pay_Later/part-r-00000-63373292-8473-47dc-9b9c-ec29724afe7b.gz.parquet:36:7',
   'parquet.hadoop.InternalParquetRecordReader:nextKeyValue:InternalParquetRecordReader.java:228',
   'parquet.hadoop.ParquetRecordReader:nextKeyValue:ParquetRecordReader.java:201',
   'org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper:<init>:ParquetRecordReaderWrapper.java:122',
   'org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper:<init>:ParquetRecordReaderWrapper.java:85',
   'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat:getRecordReader:MapredParquetInputFormat.java:72',
   'org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit:getRecordReader:FetchOperator.java:673',
   'org.apache.hadoop.hive.ql.exec.FetchOperator:getRecordReader:FetchOperator.java:323',
   'org.apache.hadoop.hive.ql.exec.FetchOperator:getNextRow:FetchOperator.java:445',
   '*java.lang.UnsupportedOperationException:parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary:47:11',
   'parquet.column.Dictionary:decodeToBinary:Dictionary.java:44',
   'org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter$BinaryConverter:setDictionary:ETypeConverter.java:227',
   'parquet.column.impl.ColumnReaderImpl:<init>:ColumnReaderImpl.java:339',
   'parquet.column.impl.ColumnReadStoreImpl:newMemColumnReader:ColumnReadStoreImpl.java:66',
   'parquet.column.impl.ColumnReadStoreImpl:getColumnReader:ColumnReadStoreImpl.java:61',
   'parquet.io.RecordReaderImplementation:<init>:RecordReaderImplementation.java:270',
   'parquet.io.MessageColumnIO$1:visit:MessageColumnIO.java:134',
   'parquet.io.MessageColumnIO$1:visit:MessageColumnIO.java:99',
   'parquet.filter2.compat.FilterCompat$NoOpFilter:accept:FilterCompat.java:154',
   'parquet.io.MessageColumnIO:getRecordReader:MessageColumnIO.java:99',
   'parquet.hadoop.InternalParquetRecordReader:checkRead:InternalParquetRecordReader.java:137',
   'parquet.hadoop.InternalParquetRecordReader:nextKeyValue:InternalParquetRecordReader.java:208'],
   statusCode=3), results=None, hasMoreRows=None)


This is the schema:

1 approve_status int
2 base_product_id bigint
3 bazaar_date_from timestamp
4 bazaar_date_to timestamp
5 bazaar_price string
6 bulk_order int
7 campaign_flag int
8 campaign_from timestamp
9 campaign_to timestamp
10 checkout_url string
11 created_date timestamp
12 cst_per double
13 dead_weight double
14 delivery string
15 delivery_timeline int
16 dispatch_sla int
17 getit_subscribed_product_id bigint
18 height string
19 is_cod int
20 is_deleted int
21 is_fc_inventory int
22 length string
23 modified_date timestamp
24 online_status string
25 product_status string
26 prompt int
27 prompt_key string
28 quantity int
29 sku string
30 split_count int
31 status boolean
32 store_id bigint
33 store_offer_price string
34 store_price string
35 subscribed_product_id bigint
36 subscribe_shipping_charge int
37 transfer_price string
38 vat_per double
39 volumetric_weight double
40 warranty string
41 weight string
42 width string
43 partitioned_on_product_status string
44 NULL NULL
45 # Partition Information NULL NULL
46 # col_name             data_type            comment
47 NULL NULL
48 partitioned_on_product_status string

Kindly let me know what the trouble could be. I am a bit sceptical about
the data types, but the other tables where I hit no such exception also
use timestamp, bigint, etc. as column datatypes.
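
Reading the bottom of the trace, the immediate failure is the
java.lang.UnsupportedOperationException raised from
parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary
via Dictionary.decodeToBinary, which looks like the reader hitting an
INT64 (long) column where the table declares a string. One way to confirm
which column is mismatched would be to read the offending file directly
from a Spark shell and compare the schema Spark wrote with the Hive
definition above. A rough sketch, assuming a Spark 1.x shell with access
to the same cluster:

    // Read the failing file directly and print the schema that the
    // Spark job actually wrote into it.
    val df = sqlContext.read.parquet(
      "hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder/" +
        "partitioned_on_modeofpayment=Pay_Later/" +
        "part-r-00000-63373292-8473-47dc-9b9c-ec29724afe7b.gz.parquet")

    // Any column printed here as long or double, but declared string in
    // the Hive table, is a candidate for the decodeToBinary failure.
    df.printSchema()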

Thanks,
Sumit

Re: parquet : Can not read value at 0 in block -1

Posted by Sumit Khanna <su...@askme.in>.
Also, viewing a sample of only a few columns does not work; I think those
columns alone are the culprits. The values are basically doubles, but I
expect them to be strings in Hive for a particular purpose. This is really
eating at me, because the column values otherwise look good: they are all
floats ranging from 0.0000 to 5894940.0000, and the columns have string
type in the Hive table schema.
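
If the Spark job that writes the mirror inferred those columns as a
numeric type (the trace shows a long dictionary), the file carries them
with a numeric physical type while the table declares string, which would
explain the decodeToBinary failure. Casting them to string before the
write should make the physical type BINARY (UTF8) and match the Hive
schema. A hypothetical sketch: mirrorDf and the output path are
placeholders, and the column list is only a guess from the table
definition:

    // Hypothetical fix: force the suspect columns to string so the
    // written Parquet type matches the Hive table definition.
    import org.apache.spark.sql.functions.col

    val suspectCols = Seq("bazaar_price", "store_price",
      "store_offer_price", "transfer_price")
    val fixed = suspectCols.foldLeft(mirrorDf) { (d, c) =>
      d.withColumn(c, col(c).cast("string"))
    }
    fixed.write.mode("overwrite")
      .parquet("hdfs://askmehadoop/parquet1_crmdb_fixed")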

Can anyone help please? Sorry for the spamming.

Thanks,


Re: parquet : Can not read value at 0 in block -1

Posted by Sumit Khanna <su...@askme.in>.
Well anyway, even from Hue, if I try loading the data partition-wise it
throws the same error. I am really perplexed as to what this bug actually
is. Thanks.

Sumit


Re: parquet : Can not read value at 0 in block -1

Posted by Sumit Khanna <su...@askme.in>.
However, I just saw that the data samples can be viewed fine in the Hue UI
"sample" view. So now I wonder whether this is even a Parquet error, or
something Hive is failing at.
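
One way to narrow that down would be to read the table with Spark's own
Parquet reader, which bypasses Hive's SerDe entirely: if something like
the following works from a spark-shell while the Hive query fails, the
files themselves are fine and the mismatch is in the Hive column types. A
sketch, under the same Spark 1.x assumption as the path above:

    // If Spark can read what Hive chokes on, the files are readable and
    // the Hive table's declared column types are the likely mismatch.
    sqlContext.read
      .parquet("hdfs://askmehadoop/parquet1_crmdb_crmdb_prod_vtiger_salesorder")
      .limit(5)
      .show()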

Thanks,
