Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2019/05/17 21:34:18 UTC

[GitHub] [incubator-druid] jon-wei opened a new issue #7690: Possible invalid read issue with GroupBy V2 spilled dictionaries

jon-wei opened a new issue #7690: Possible invalid read issue with GroupBy V2 spilled dictionaries
URL: https://github.com/apache/incubator-druid/issues/7690
 
 
   ### Affected Version
   
   0.13.0
   
   ### Description
   
   A user reported the following error when running a GroupBy V2 query: https://groups.google.com/forum/#!topic/druid-user/uM690lhVo7k
   
   ```
   2019-05-08T21:16:35,187 ERROR [qtp269685385-140[groupBy_[(redacted)]] org.apache.druid.server.QueryResource - Exception handling request: {class=org.apache.druid.server.QueryResource, exceptionType=class com.fasterxml.jackson.databind.RuntimeJsonMappingException, exceptionMessage=Can not deserialize instance of java.lang.String out of VALUE_NULL token
    at [Source: LZ4BlockInputStream(in=java.io.FileInputStream@5441412, decompressor=LZ4JNIFastDecompressor, checksum=StreamingXXHash32JNI(seed=-1756908916)); line: -1, column: 1259], exception=com.fasterxml.jackson.databind.RuntimeJsonMappingException: Cannot deserialize instance of java.lang.String out of VALUE_NULL token at [Source: LZ4BlockInputStream(in=java.io.FileInputStream@5441412, decompressor=LZ4JNIFastDecompressor, checksum=StreamingXXHash32JNI(seed=-1756908916)); line: -1, column: 1259], query=GroupByQuery{dataSource='(redacted)', querySegmentSpec=MultipleSpecificSegmentSpec{descriptors=[SegmentDescriptor{interval=2019-05-01T00:00:00.000Z/2019-05-05T00:00:00.000Z, version='2019-05-08T12:01:15.823Z', partitionNumber=0}]}, virtualColumns=[], limitSpec=NoopLimitSpec, dimFilter=((redacted)}
   com.fasterxml.jackson.databind.RuntimeJsonMappingException: Can not deserialize instance of java.lang.String out of VALUE_NULL token
    at [Source: LZ4BlockInputStream(in=java.io.FileInputStream@5441412, decompressor=LZ4JNIFastDecompressor, checksum=StreamingXXHash32JNI(seed=-1756908916)); line: -1, column: 1259]
           at com.fasterxml.jackson.databind.MappingIterator.next(MappingIterator.java:194) ~[jackson-databind-2.6.7.jar:2.6.7]
           at org.apache.druid.query.groupby.epinephelinae.SpillingGrouper.mergeAndGetDictionary(SpillingGrouper.java:223) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.tryMergeDictionary(ConcurrentGrouper.java:392) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.ConcurrentGrouper.iterator(ConcurrentGrouper.java:320) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.CloseableGrouperIterator.<init>(CloseableGrouperIterator.java:44) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:426) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.RowBasedGrouperHelper.makeGrouperIterator(RowBasedGrouperHelper.java:414) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunnerV2$1.make(GroupByMergingQueryRunnerV2.java:282) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.query.groupby.epinephelinae.GroupByMergingQueryRunnerV2$1.make(GroupByMergingQueryRunnerV2.java:158) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.java.util.common.guava.BaseSequence.toYielder(BaseSequence.java:64) ~[java-util-0.13.0-incubating.jar:0.13.0-incubating]
           at org.apache.druid.common.guava.CombiningSequence.toYielder(CombiningSequence.java:80) ~[druid-common-0.13.0-incubating.jar:0.13.0-incubating]
   ```
   
   From the stack trace, it looks like invalid data may be written to, or read from, the on-disk spilled dictionaries.
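   
   For reference, the spill/merge pattern implied by the stack trace (an LZ4-compressed stream of JSON values read back through a Jackson `MappingIterator` in `SpillingGrouper.mergeAndGetDictionary`) looks roughly like the sketch below. This is a simplified illustration, not the actual `SpillingGrouper` code; the class, method, and file names are made up for the example.
   
   ```java
   import com.fasterxml.jackson.core.JsonGenerator;
   import com.fasterxml.jackson.databind.MappingIterator;
   import com.fasterxml.jackson.databind.ObjectMapper;
   import net.jpountz.lz4.LZ4BlockInputStream;
   import net.jpountz.lz4.LZ4BlockOutputStream;
   
   import java.io.File;
   import java.io.FileInputStream;
   import java.io.FileOutputStream;
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;
   
   // Simplified illustration of the spill/merge pattern suggested by the stack
   // trace; NOT the actual Druid SpillingGrouper implementation.
   public class DictionarySpillSketch
   {
     private static final ObjectMapper MAPPER = new ObjectMapper();
   
     // Spill the dictionary as a root-level sequence of JSON string values,
     // LZ4-block-compressed on disk.
     static void spillDictionary(File file, List<String> dictionary) throws IOException
     {
       try (LZ4BlockOutputStream out = new LZ4BlockOutputStream(new FileOutputStream(file));
            JsonGenerator gen = MAPPER.getFactory().createGenerator(out)) {
         for (String value : dictionary) {
           gen.writeString(value);
         }
       }
     }
   
     // Read the spilled values back when merging. MappingIterator.next() throws
     // RuntimeJsonMappingException if it hits a token it cannot bind to String
     // (e.g. VALUE_NULL), which is what the user's stack trace shows.
     static List<String> readSpilledDictionary(File file) throws IOException
     {
       List<String> dictionary = new ArrayList<>();
       try (LZ4BlockInputStream in = new LZ4BlockInputStream(new FileInputStream(file))) {
         MappingIterator<String> it = MAPPER.readerFor(String.class).readValues(in);
         while (it.hasNext()) {
           dictionary.add(it.next());
         }
       }
       return dictionary;
     }
   }
   ```
   
   Under a pattern like this, a partially written or corrupted spill file would not necessarily fail at the LZ4 layer; the failure can surface only when Jackson tries to bind an unexpected token, which matches where the exception is thrown in the trace above.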
