Posted to commits@druid.apache.org by GitBox <gi...@apache.org> on 2021/12/02 09:46:31 UTC

[GitHub] [druid] wangxiaobaidu11 commented on pull request #10920: Spark Direct Readers and Writers for Druid.

wangxiaobaidu11 commented on pull request #10920:
URL: https://github.com/apache/druid/pull/10920#issuecomment-984461681


   Hi @JulianJaffePinterest, I hope things get better for you soon. Take good care of yourself! Last month I tested your code and ran into the problem described below:
   ```
   21/12/02 15:19:40 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
   noc.com.fasterxml.jackson.databind.exc.InvalidTypeIdException: Could not resolve type id 'thetaSketch' as a subtype of `org.apache.druid.query.aggregation.AggregatorFactory`: known type ids = [cardinality, count, doubleAny, doubleFirst, doubleLast, doubleMax, doubleMean, doubleMin, doubleSum, filtered, floatAny, floatFirst, floatLast, floatMax, floatMin, floatSum, grouping, histogram, hyperUnique, javascript, longAny, longFirst, longLast, longMax, longMin, longSum, stringAny, stringFirst, stringFirstFold, stringLast, stringLastFold]
    at [Source: (String)"[
     { "type": "count", "name": "count" },
     { "type": "longSum", "name": "sum_metric1", "fieldName": "sum_metric1" },
     { "type": "longSum", "name": "sum_metric2", "fieldName": "sum_metric2" },
     { "type": "doubleSum", "name": "sum_metric3", "fieldName": "sum_metric3" },
     { "type": "floatSum", "name": "sum_metric4", "fieldName": "sum_metric4" }, 
     { "type": "thetaSketch", "name": "uniq_id1", "fieldName": "uniq_id1", "isInputThetaSketch": true }
   ]"; line: 7, column: 13] (through reference chain: java.lang.Object[][5])
           at noc.com.fasterxml.jackson.databind.exc.InvalidTypeIdException.from(InvalidTypeIdException.java:43)
           at noc.com.fasterxml.jackson.databind.DeserializationContext.invalidTypeIdException(DeserializationContext.java:1761)
           at noc.com.fasterxml.jackson.databind.DeserializationContext.handleUnknownTypeId(DeserializationContext.java:1268)
           at noc.com.fasterxml.jackson.databind.jsontype.impl.TypeDeserializerBase._handleUnknownTypeId(TypeDeserializerBase.java:290)
           at noc.com.fasterxml.jackson.databind.jsontype.impl.TypeDeserializerBase._findDeserializer(TypeDeserializerBase.java:162)
           at noc.com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer._deserializeTypedForId(AsPropertyTypeDeserializer.java:113)
           at noc.com.fasterxml.jackson.databind.jsontype.impl.AsPropertyTypeDeserializer.deserializeTypedFromObject(AsPropertyTypeDeserializer.java:97)
           at noc.com.fasterxml.jackson.databind.deser.AbstractDeserializer.deserializeWithType(AbstractDeserializer.java:254)
           at noc.com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:197)
           at noc.com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:21)
           at noc.com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4218)
           at noc.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3214)
           at noc.com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3197)
           at org.apache.druid.spark.v2.writer.DruidDataWriterFactory$.createDataSchemaFromConfiguration(DruidDataWriterFactory.scala:99)
           at org.apache.druid.spark.v2.writer.DruidDataWriterFactory.createDataWriter(DruidDataWriterFactory.scala:70)
           at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:113)
           at org.apache.spark.sql.execution.datasources.v2.WriteToDataSourceV2Exec.$anonfun$doExecute$2(WriteToDataSourceV2Exec.scala:67)
           at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
           at org.apache.spark.scheduler.Task.run(Task.scala:121)
           at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
           at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
           at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
           at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
           at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
           at java.lang.Thread.run(Thread.java:748)
   21/12/02 15:19:40 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): noc.com.fasterxml.jackson.databind.exc.InvalidTypeIdException: Could not resolve type id 'thetaSketch' as a subtype of `org.apache.druid.query.aggregation.AggregatorFactory`: known type ids = [cardinality, count, doubleAny, doubleFirst, doubleLast, doubleMax, doubleMean, doubleMin, doubleSum, filtered, floatAny, floatFirst, floatLast, floatMax, floatMin, floatSum, grouping, histogram, hyperUnique, javascript, longAny, longFirst, longLast, longMax, longMin, longSum, stringAny, stringFirst, stringFirstFold, stringLast, stringLastFold]
    [exception message and stack trace identical to the ERROR above, elided]
   ```
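   Looking at the known type ids in the message, only Druid's core aggregators are registered with the ObjectMapper that parses the metrics spec; `thetaSketch` comes from the `druid-datasketches` extension, so its Jackson subtype is never added. A minimal sketch of the kind of fix I have in mind, assuming the extension jar is on the executor classpath (`mapper` here is only a stand-in for whatever ObjectMapper the writer factory actually uses):
   ```scala
   // A sketch only: register the DataSketches extension with the Jackson mapper
   // so that "thetaSketch" resolves as a subtype of AggregatorFactory.
   // Assumes druid-datasketches is on the executor classpath; `mapper` is a
   // stand-in for the mapper used in DruidDataWriterFactory, not its real field.
   import scala.collection.JavaConverters._
   import org.apache.druid.jackson.DefaultObjectMapper
   import org.apache.druid.query.aggregation.datasketches.theta.SketchModule

   val mapper = new DefaultObjectMapper()
   // Registers the complex-metric serde for theta sketch columns
   SketchModule.registerSerde()
   // Adds the extension's Jackson subtypes (thetaSketch, etc.) to the mapper
   new SketchModule().getJacksonModules.asScala.foreach(m => mapper.registerModule(m))
   ```
   If the connector already exposes a registry for extension aggregators, registering the theta sketch types there (and shipping the datasketches jar to the executors) is probably the cleaner route; I may just be missing that hook.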
   
   
   > Hey @jihoonson, I had some unexpected and unfortunate personal/familial crises to deal with these past few months. While they're not entirely in the rear-view mirror, I should have more time again to push this to the finish line. I've opened #11823 with the next chunk of code (the reading half of the connector). Please let me know if you think the PR is still too big; I couldn't find a good place to split it that wouldn't require a reviewer to know the rest of the code anyway.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@druid.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


