Posted to issues@carbondata.apache.org by "Jatin (JIRA)" <ji...@apache.org> on 2018/07/19 09:02:00 UTC

[jira] [Updated] (CARBONDATA-2758) selection on a local dictionary column fails when the column holds more null values than the default batch size.

     [ https://issues.apache.org/jira/browse/CARBONDATA-2758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jatin updated CARBONDATA-2758:
------------------------------
    Description: 
An ArrayIndexOutOfBoundsException is thrown by the following sequence of commands:

1. create table t1(s1 int,s2 string,s3 string) stored by 'carbondata' TBLPROPERTIES('SORT_SCOPE'='BATCH_SORT')

2. load data from a CSV file in which all values are null and which contains at least 4097 rows,

or 

run insert into t1 select cast(null as int),cast(null as string),cast(null as string) 5000 times

3. select * from t1;
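
For convenience, steps 1-3 can be scripted end to end from spark-shell. The sketch below is illustrative rather than authoritative: it assumes the CarbonSession builder API shipped with CarbonData 1.5.x, an arbitrary store path (/tmp/carbonstore), and Spark's range() table-valued function as a stand-in for issuing the single-row insert 5000 times.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// Assumption: CarbonData 1.5.x CarbonSession API; the store path is arbitrary.
val carbon = SparkSession.builder()
  .master("local[2]")
  .appName("CARBONDATA-2758-repro")
  .getOrCreateCarbonSession("/tmp/carbonstore")

carbon.sql("drop table if exists t1")
carbon.sql(
  """create table t1(s1 int, s2 string, s3 string)
    |stored by 'carbondata'
    |TBLPROPERTIES('SORT_SCOPE'='BATCH_SORT')""".stripMargin)

// 5000 all-null rows: more than the default vectorized batch size (4096),
// so the scan has to fill nulls across a batch boundary.
carbon.sql(
  """insert into t1
    |select cast(null as int), cast(null as string), cast(null as string)
    |from range(5000)""".stripMargin)

// On affected builds this dies with ArrayIndexOutOfBoundsException: 4096.
carbon.sql("select * from t1").show()

Local dictionary is, to my understanding, enabled by default for string columns in 1.5.0, so no extra table property should be needed to reach the local-dictionary code path.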

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 207, BLR1000014267, executor 1): java.lang.ArrayIndexOutOfBoundsException: 4096
  at org.apache.carbondata.spark.vectorreader.ColumnarVectorWrapper.putNull(ColumnarVectorWrapper.java:181)
  at org.apache.carbondata.core.datastore.chunk.store.impl.LocalDictDimensionDataChunkStore.fillRow(LocalDictDimensionDataChunkStore.java:63)
  at org.apache.carbondata.core.datastore.chunk.impl.VariableLengthDimensionColumnPage.fillVector(VariableLengthDimensionColumnPage.java:117)
  at org.apache.carbondata.core.scan.result.BlockletScannedResult.fillColumnarNoDictionaryBatch(BlockletScannedResult.java:260)
  at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.fillResultToColumnarBatch(DictionaryBasedVectorResultCollector.java:166)
  at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectResultInColumnarBatch(DictionaryBasedVectorResultCollector.java:157)
  at org.apache.carbondata.core.scan.processor.DataBlockIterator.processNextBatch(DataBlockIterator.java:245)
  at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:48)
  at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:307)
  at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:182)
  at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:497)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:381)
  at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
  at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)
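
A note on the trace: the failing index (4096) equals the default vectorized batch size, and the two deepest CarbonData frames (LocalDictDimensionDataChunkStore.fillRow, ColumnarVectorWrapper.putNull) sit on the local-dictionary null-fill path. A plausible reading, not confirmed against the eventual patch, is that the fill loop indexes the per-batch column vector with page-relative row ids, which overflows once a page carries more than one batch's worth of all-null rows. A toy sketch of that shape follows (all names invented for illustration; this is not CarbonData source):

// Suspected shape of the bug, reconstructed from the stack trace only.
val defaultBatchSize = 4096                       // slots in one column vector
val pageRowCount = 5000                           // all-null rows in the page
val vectorIsNull = new Array[Boolean](defaultBatchSize)

// Unbounded fill over page-relative row ids: overflows at index 4096,
// matching the reported exception.
def fillUnbounded(): Unit =
  for (rowId <- 0 until pageRowCount)
    vectorIsNull(rowId) = true                    // AIOOBE: 4096 here

// Bounded alternative: clamp to the batch and re-base the index, leaving
// the remaining rows for the next batch.
def fillBatch(startRow: Int): Unit = {
  val upper = math.min(startRow + defaultBatchSize, pageRowCount)
  for (rowId <- startRow until upper)
    vectorIsNull(rowId - startRow) = true
}

fillBatch(0)      // fills rows 0 to 4095
fillBatch(4096)   // fills rows 4096 to 4999
// fillUnbounded() // would throw, as in the trace above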

  was:
An ArrayIndexOutOfBoundsException is thrown by the following sequence of commands:

1. create table t1(s1 int,s2 string,s3 string) stored by 'carbondata' TBLPROPERTIES('SORT_SCOPE'='BATCH_SORT')

2. load data from a CSV file in which all values are null,

or 

run insert into t1 select cast(null as int),cast(null as string),cast(null as string) 5000 times

3. select * from t1;

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 207, BLR1000014267, executor 1): java.lang.ArrayIndexOutOfBoundsException: 4096
 at org.apache.carbondata.spark.vectorreader.ColumnarVectorWrapper.putNull(ColumnarVectorWrapper.java:181)
 at org.apache.carbondata.core.datastore.chunk.store.impl.LocalDictDimensionDataChunkStore.fillRow(LocalDictDimensionDataChunkStore.java:63)
 at org.apache.carbondata.core.datastore.chunk.impl.VariableLengthDimensionColumnPage.fillVector(VariableLengthDimensionColumnPage.java:117)
 at org.apache.carbondata.core.scan.result.BlockletScannedResult.fillColumnarNoDictionaryBatch(BlockletScannedResult.java:260)
 at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.fillResultToColumnarBatch(DictionaryBasedVectorResultCollector.java:166)
 at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectResultInColumnarBatch(DictionaryBasedVectorResultCollector.java:157)
 at org.apache.carbondata.core.scan.processor.DataBlockIterator.processNextBatch(DataBlockIterator.java:245)
 at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:48)
 at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:307)
 at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:182)
 at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:497)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
 at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
 at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
 at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:381)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
 at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)


> selection on a local dictionary column fails when the column holds more null values than the default batch size.
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-2758
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-2758
>             Project: CarbonData
>          Issue Type: Bug
>          Components: spark-integration
>    Affects Versions: 1.5.0
>            Environment: 3-node cluster running Spark 2.2
>            Reporter: Jatin
>            Assignee: Jatin
>            Priority: Minor
>             Fix For: 1.5.0
>
>
> An ArrayIndexOutOfBoundsException is thrown by the following sequence of commands:
> 1. create table t1(s1 int,s2 string,s3 string) stored by 'carbondata' TBLPROPERTIES('SORT_SCOPE'='BATCH_SORT')
> 2. load data from a CSV file in which all values are null and which contains at least 4097 rows,
> or 
> run insert into t1 select cast(null as int),cast(null as string),cast(null as string) 5000 times
> 3. select * from t1;
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6.0 failed 4 times, most recent failure: Lost task 0.3 in stage 6.0 (TID 207, BLR1000014267, executor 1): java.lang.ArrayIndexOutOfBoundsException: 4096
>   at org.apache.carbondata.spark.vectorreader.ColumnarVectorWrapper.putNull(ColumnarVectorWrapper.java:181)
>   at org.apache.carbondata.core.datastore.chunk.store.impl.LocalDictDimensionDataChunkStore.fillRow(LocalDictDimensionDataChunkStore.java:63)
>   at org.apache.carbondata.core.datastore.chunk.impl.VariableLengthDimensionColumnPage.fillVector(VariableLengthDimensionColumnPage.java:117)
>   at org.apache.carbondata.core.scan.result.BlockletScannedResult.fillColumnarNoDictionaryBatch(BlockletScannedResult.java:260)
>   at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.fillResultToColumnarBatch(DictionaryBasedVectorResultCollector.java:166)
>   at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectResultInColumnarBatch(DictionaryBasedVectorResultCollector.java:157)
>   at org.apache.carbondata.core.scan.processor.DataBlockIterator.processNextBatch(DataBlockIterator.java:245)
>   at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:48)
>   at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:307)
>   at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:182)
>   at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:497)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.scan_nextBatch$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithKeys$(Unknown Source)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:381)
>   at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>   at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:126)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)