Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2019/08/21 01:29:01 UTC

[GitHub] [incubator-druid] jon-wei opened a new issue #8348: Integer->Long ClassCastException in TopN query

URL: https://github.com/apache/incubator-druid/issues/8348
 
 
   `TopNNumericResultBuilder` can hit a ClassCastException on the broker when a query uses a long dimension, due to Jackson behavior where a number is deserialized as a `java.lang.Integer` instead of a `java.lang.Long` if the value fits in 32 bits.
   
   ```
   java.lang.ClassCastException: java.lang.Integer cannot be cast to java.lang.Long
   	at java.lang.Long.compareTo(Long.java:54) ~[?:1.8.0_202]
   	at org.apache.druid.query.topn.TopNNumericResultBuilder$1.compare(TopNNumericResultBuilder.java:67) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.query.topn.TopNNumericResultBuilder$1.compare(TopNNumericResultBuilder.java:52) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.query.topn.TopNNumericResultBuilder.lambda$new$0(TopNNumericResultBuilder.java:99) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:670) ~[?:1.8.0_202]
   	at java.util.PriorityQueue.siftUp(PriorityQueue.java:646) ~[?:1.8.0_202]
   	at java.util.PriorityQueue.offer(PriorityQueue.java:345) ~[?:1.8.0_202]
   	at java.util.PriorityQueue.add(PriorityQueue.java:322) ~[?:1.8.0_202]
   	at org.apache.druid.query.topn.TopNNumericResultBuilder.addEntry(TopNNumericResultBuilder.java:204) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.query.topn.TopNBinaryFn.apply(TopNBinaryFn.java:132) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.query.topn.TopNBinaryFn.apply(TopNBinaryFn.java:39) ~[druid-processing-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.common.guava.CombiningSequence$CombiningYieldingAccumulator.accumulate(CombiningSequence.java:210) ~[druid-core-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   	at org.apache.druid.java.util.common.guava.MergeSequence.makeYielder(MergeSequence.java:104) ~[druid-core-0.16.0-incubating-SNAPSHOT.jar:0.16.0-incubating-SNAPSHOT]
   ```
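
   For reference, here is a minimal standalone sketch of the Jackson behavior described above (not Druid code; it assumes only `jackson-databind` on the classpath). A JSON number that fits in 32 bits comes back as `java.lang.Integer`, a larger one comes back as `java.lang.Long`, and comparing them through a `Long`-typed comparator reproduces the ClassCastException in the stack trace:

   ```
   import com.fasterxml.jackson.databind.ObjectMapper;

   public class JacksonIntVsLong
   {
     @SuppressWarnings("unchecked")
     public static void main(String[] args) throws Exception
     {
       ObjectMapper mapper = new ObjectMapper();

       // With Object as the target type, Jackson picks the narrowest boxed type
       // that can hold the value.
       Object small = mapper.readValue("10", Object.class);         // java.lang.Integer
       Object large = mapper.readValue("8147483647", Object.class); // java.lang.Long (exceeds 2^31 - 1)

       System.out.println(small.getClass()); // class java.lang.Integer
       System.out.println(large.getClass()); // class java.lang.Long

       // Comparing the two as if both were Long fails, mirroring the
       // Long.compareTo frame in the stack trace above.
       Comparable<Object> asLong = (Comparable<Object>) large;
       asLong.compareTo(small); // ClassCastException: Integer cannot be cast to Long
     }
   }
   ```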
   
   
   The following data and specs can be used to reproduce the issue.
   
   Data
   ```
   {"time": "2015-09-12T00:46:58.771Z", "id": 10, "val": 5}
   {"time": "2015-09-13T00:46:58.771Z", "id": 8147483647, "val": 5}
   ```
   
   Ingest spec
   ```
   {
     "type" : "index",
     "spec" : {
       "dataSchema" : {
         "dataSource" : "topnmerge",
         "parser" : {
           "type" : "string",
           "parseSpec" : {
             "format" : "json",
             "dimensionsSpec" : {
               "dimensions" : [
                 {
                   "type":"long",
                   "name":"id"
                 }
               ],
               "dimensionExclusions" : []
             },
             "timestampSpec" : {
               "format" : "auto",
               "column" : "time"
             }
           }
         },
         "metricsSpec" : [
           { "type" : "count", "name" : "count" },
           { "type" : "longSum", "name" : "val", "fieldName" : "val" }
         ],
         "granularitySpec" : {
           "type" : "uniform",
           "segmentGranularity" : "day",
           "queryGranularity" : "none",
           "intervals" : ["2015-09-01/2015-09-20"],
           "rollup" : false
         }
       },
       "ioConfig" : {
         "type" : "index",
         "firehose" : {
           "type" : "local",
           "baseDir" : "quickstart/topnmerge/",
           "filter" : "data.json"
         },
         "appendToExisting" : false
       },
       "tuningConfig" : {
         "type" : "index",
         "targetPartitionSize" : null,
         "maxRowsInMemory" : 1000,
         "forceGuaranteedRollup" : true,
         "numShards": 1
       }
     }
   }
   ```
   
   Query
   ```
   {
     "queryType": "topN",
     "dataSource": {
       "type": "table",
       "name": "topnmerge"
     },
     "virtualColumns": [],
     "dimension": {
       "type": "default",
       "dimension": "id",
       "outputName": "d0",
       "outputType": "LONG"
     },
     "metric": {
       "type": "numeric",
       "metric": "a0"
     },
     "threshold": 1000,
     "intervals": {
       "type": "intervals",
       "intervals": [
         "-146136543-09-08T08:23:32.096Z/146140482-04-24T15:36:27.903Z"
       ]
     },
     "filter": null,
     "granularity": {
       "type": "all"
     },
     "aggregations": [
       {
         "type": "longSum",
         "name": "a0",
         "fieldName": "val",
         "expression": null
       }
     ],
     "postAggregations": [],
     "context": {
     },
     "descending": false
   }
   ```
   
   With the examples above, the issue can be reproduced on a quickstart cluster with two historicals by putting each historical in a separate tier and setting load rules such that each tier loads only one day of data (9/12 or 9/13).
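
   For illustration, that tiering setup could be expressed with Coordinator load rules along the lines of the following (a sketch, not part of the original report; `tier_0912` and `tier_0913` are placeholders for whatever `druid.server.tier` values the two historicals are started with):

   ```
   [
     { "type" : "loadByInterval", "interval" : "2015-09-12/2015-09-13", "tieredReplicants" : { "tier_0912" : 1 } },
     { "type" : "loadByInterval", "interval" : "2015-09-13/2015-09-14", "tieredReplicants" : { "tier_0913" : 1 } },
     { "type" : "dropForever" }
   ]
   ```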
