Posted to notifications@asterixdb.apache.org by "Steven Jacobs (JIRA)" <ji...@apache.org> on 2017/09/18 19:49:00 UTC

[jira] [Comment Edited] (ASTERIXDB-2101) Record Merge Error when running job

    [ https://issues.apache.org/jira/browse/ASTERIXDB-2101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170285#comment-16170285 ] 

Steven Jacobs edited comment on ASTERIXDB-2101 at 9/18/17 7:48 PM:
-------------------------------------------------------------------

INFO: Optimized Plan:
commit
-- COMMIT  |PARTITIONED|
  project ([$$6])
  -- STREAM_PROJECT  |PARTITIONED|
    exchange
    -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
      insert into channels.roomRecordsResults from record: $$8 partitioned by [$$6]
      -- INSERT_DELETE  |PARTITIONED|
        exchange
        -- HASH_PARTITION_EXCHANGE [$$6]  |PARTITIONED|
          assign [$$6] <- [$$8.getField(0)]
          -- ASSIGN  |PARTITIONED|
            project ([$$8])
            -- STREAM_PROJECT  |PARTITIONED|
              assign [$$8] <- [cast($$7)]
              -- ASSIGN  |PARTITIONED|
                project ([$$7])
                -- STREAM_PROJECT  |PARTITIONED|
                  assign [$$7] <- [check-unknown(object-merge($$4, {"id": create-uuid()}))]
                  -- ASSIGN  |PARTITIONED|
                    project ([$$4])
                    -- STREAM_PROJECT  |PARTITIONED|
                      assign [$$4] <- [{"subscriptionId": $$9}]
                      -- ASSIGN  |PARTITIONED|
                        project ([$$9])
                        -- STREAM_PROJECT  |PARTITIONED|
                          exchange
                          -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                            data-scan []<-[$$9, $$sub] <- channels.roomRecordsSubscriptions
                            -- DATASOURCE_SCAN  |PARTITIONED|
                              exchange
                              -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                empty-tuple-source
                                -- EMPTY_TUPLE_SOURCE  |PARTITIONED|

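For reference, the expression whose evaluator fails to compile in both plans is the ASSIGN of check-unknown(object-merge(..., {"id": create-uuid()})), i.e. the record merge that injects the autogenerated primary key just before the INSERT. The following is a minimal, hedged SQL++ sketch (illustrative only, not part of the original report) that presumably exercises the same object-merge evaluator outside of the insert pipeline; it assumes object_merge and uuid() are the SQL++ surface names for the object-merge and create-uuid builtins that appear in the plan.

use channels;

-- illustrative only: evaluates an object merge per subscription record,
-- roughly mirroring the ASSIGN in the optimized plan above
select value object_merge({"subscriptionId": sub.subscriptionId},
                          {"id": uuid()})   -- uuid() assumed; the plan shows the internal create-uuid()
from roomRecordsSubscriptions sub;
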

was (Author: sjaco002):
Here is the generated plan:
distribute result [$$30]
-- DISTRIBUTE_RESULT  |PARTITIONED|
  exchange
  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
    project ([$$30])
    -- STREAM_PROJECT  |PARTITIONED|
      commit
      -- COMMIT  |PARTITIONED|
        project ([$$28, $$30])
        -- STREAM_PROJECT  |PARTITIONED|
          exchange
          -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
            insert into channels.roomRecordsResults from record: $$31 partitioned by [$$28]
            -- INSERT_DELETE  |PARTITIONED|
              exchange
              -- HASH_PARTITION_EXCHANGE [$$28]  |PARTITIONED|
                assign [$$28] <- [$$31.getField(0)]
                -- ASSIGN  |PARTITIONED|
                  assign [$$31] <- [cast($$30)]
                  -- ASSIGN  |PARTITIONED|
                    project ([$$30])
                    -- STREAM_PROJECT  |PARTITIONED|
                      assign [$$30] <- [check-unknown(object-merge($$26, {"id": create-uuid()}))]
                      -- ASSIGN  |PARTITIONED|
                        project ([$$26])
                        -- STREAM_PROJECT  |PARTITIONED|
                          assign [$$26] <- [{"result": {"userId": $$35}, "channelExecutionTime": $$channelExecutionTime, "subscriptionId": $$32, "deliveryTime": current-datetime()}]
                          -- ASSIGN  |PARTITIONED|
                            project ([$$channelExecutionTime, $$32, $$35])
                            -- STREAM_PROJECT  |PARTITIONED|
                              exchange
                              -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                join (eq($$40, $$41))
                                -- HYBRID_HASH_JOIN [$$41][$$40]  |PARTITIONED|
                                  exchange
                                  -- HASH_PARTITION_EXCHANGE [$$41]  |PARTITIONED|
                                    project ([$$channelExecutionTime, $$32, $$41])
                                    -- STREAM_PROJECT  |PARTITIONED|
                                      exchange
                                      -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                        join (and(eq($$33, $$39), eq($$34, $$37)))
                                        -- HYBRID_HASH_JOIN [$$39, $$37][$$33, $$34]  |PARTITIONED|
                                          exchange
                                          -- HASH_PARTITION_EXCHANGE [$$39, $$37]  |PARTITIONED|
                                            project ([$$channelExecutionTime, $$32, $$37, $$39, $$41])
                                            -- STREAM_PROJECT  |PARTITIONED|
                                              assign [$$41, $$39, $$37] <- [$$sub.getField(1), $$sub.getField("DataverseName"), $$sub.getField("BrokerName")]
                                              -- ASSIGN  |PARTITIONED|
                                                exchange
                                                -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                                  data-scan []<-[$$32, $$sub] <- channels.roomRecordsSubscriptions
                                                  -- DATASOURCE_SCAN  |PARTITIONED|
                                                    exchange
                                                    -- BROADCAST_EXCHANGE  |PARTITIONED|
                                                      assign [$$channelExecutionTime] <- [current-datetime()]
                                                      -- ASSIGN  |UNPARTITIONED|
                                                        empty-tuple-source
                                                        -- EMPTY_TUPLE_SOURCE  |UNPARTITIONED|
                                          exchange
                                          -- HASH_PARTITION_EXCHANGE [$$33, $$34]  |PARTITIONED|
                                            project ([$$33, $$34])
                                            -- STREAM_PROJECT  |PARTITIONED|
                                              exchange
                                              -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                                data-scan []<-[$$33, $$34, $$b] <- Metadata.Datatype
                                                -- DATASOURCE_SCAN  |PARTITIONED|
                                                  exchange
                                                  -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                                    empty-tuple-source
                                                    -- EMPTY_TUPLE_SOURCE  |PARTITIONED|
                                  exchange
                                  -- HASH_PARTITION_EXCHANGE [$$40]  |PARTITIONED|
                                    project ([$$35, $$40])
                                    -- STREAM_PROJECT  |PARTITIONED|
                                      assign [$$40] <- [$$location.getField(1)]
                                      -- ASSIGN  |PARTITIONED|
                                        exchange
                                        -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                          data-scan []<-[$$35, $$location] <- channels.UserLocations
                                          -- DATASOURCE_SCAN  |PARTITIONED|
                                            exchange
                                            -- ONE_TO_ONE_EXCHANGE  |PARTITIONED|
                                              empty-tuple-source
                                              -- EMPTY_TUPLE_SOURCE  |PARTITIONED|

> Record Merge Error when running job
> -----------------------------------
>
>                 Key: ASTERIXDB-2101
>                 URL: https://issues.apache.org/jira/browse/ASTERIXDB-2101
>             Project: Apache AsterixDB
>          Issue Type: Bug
>            Reporter: Steven Jacobs
>
> The following will throw a runtime exception:
> drop dataverse channels if exists;
> create dataverse channels;
> use channels;
> create type result as {
>   id:uuid
> };
> create type subscriptionType as {
>   subscriptionId:uuid,
>   param0:int
> };
> create dataset roomRecordsResults(result)
> primary key id autogenerated;
> create dataset roomRecordsSubscriptions(subscriptionType)
> primary key subscriptionId autogenerated;
> use channels;
> insert into channels.roomRecordsResults (
>   select sub.subscriptionId
>   from channels.roomRecordsSubscriptions sub
> );
> Here is the stack trace:
> WARNING: Unhandled throwable
> java.lang.VerifyError: Bad return type
> Exception Details:
>   Location:
>     org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen.access$0(Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen;)Lorg/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor; @4: areturn
>   Reason:
>     Type 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen' (current frame, stack[0]) is not assignable to 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor' (from method signature)
>   Current Frame:
>     bci: @4
>     flags: { }
>     locals: { 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_EvaluatorFactoryGen' }
>     stack: { 'org/apache/asterix/runtime/evaluators/functions/records/RecordMergeDescriptor$_Gen' }
>   Bytecode:
>     0x0000000: 2ab4 0063 b0                           
> 	at org.apache.asterix.runtime.evaluators.functions.records.RecordMergeDescriptor$_Gen.createEvaluatorFactory(RecordMergeDescriptor.java:86)
> 	at org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:144)
> 	at org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
> 	at org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.codegenArguments(QueryLogicalExpressionJobGen.java:161)
> 	at org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createScalarFunctionEvaluatorFactory(QueryLogicalExpressionJobGen.java:134)
> 	at org.apache.asterix.jobgen.QueryLogicalExpressionJobGen.createEvaluatorFactory(QueryLogicalExpressionJobGen.java:109)
> 	at org.apache.hyracks.algebricks.core.algebra.expressions.ExpressionRuntimeProvider.createEvaluatorFactory(ExpressionRuntimeProvider.java:41)
> 	at org.apache.hyracks.algebricks.core.algebra.operators.physical.AssignPOperator.contributeRuntimeOperator(AssignPOperator.java:84)
> 	at org.apache.hyracks.algebricks.core.algebra.operators.logical.AbstractLogicalOperator.contributeRuntimeOperator(AbstractLogicalOperator.java:166)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:97)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compileOpRef(PlanCompiler.java:84)
> 	at org.apache.hyracks.algebricks.core.jobgen.impl.PlanCompiler.compilePlan(PlanCompiler.java:60)
> 	at org.apache.hyracks.algebricks.compiler.api.HeuristicCompilerFactoryBuilder$1$1.createJob(HeuristicCompilerFactoryBuilder.java:107)
> 	at org.apache.asterix.api.common.APIFramework.compileQuery(APIFramework.java:333)
> 	at org.apache.asterix.app.translator.QueryTranslator.rewriteCompileInsertUpsert(QueryTranslator.java:1867)
> 	at org.apache.asterix.app.translator.QueryTranslator.lambda$0(QueryTranslator.java:1755)
> 	at org.apache.asterix.app.translator.QueryTranslator.handleInsertUpsertStatement(QueryTranslator.java:1781)
> 	at org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:337)
> 	at org.apache.asterix.app.translator.QueryTranslator.compileAndExecute(QueryTranslator.java:254)
> 	at org.apache.asterix.api.http.server.ApiServlet.post(ApiServlet.java:157)
> 	at org.apache.hyracks.http.server.AbstractServlet.handle(AbstractServlet.java:78)
> 	at org.apache.hyracks.http.server.HttpRequestHandler.handle(HttpRequestHandler.java:70)
> 	at org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:55)
> 	at org.apache.hyracks.http.server.HttpRequestHandler.call(HttpRequestHandler.java:36)
> 	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)