Posted to issues@drill.apache.org by "Chun Chang (JIRA)" <ji...@apache.org> on 2015/04/28 03:09:07 UTC

[jira] [Closed] (DRILL-1894) Complex JSON causes NPE

     [ https://issues.apache.org/jira/browse/DRILL-1894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chun Chang closed DRILL-1894.
-----------------------------
    Assignee: Chun Chang  (was: Mehant Baid)

Already verified. The test case is complex111.q.
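
Verification amounts to re-running the reported query against the sample record and confirming it returns a row instead of failing. A minimal sketch of that check (the actual contents of complex111.q are not shown here, so this is just the query from the description):

{code}
-- Sketch of the verification, not the literal complex111.q: on a fixed build
-- this should return the single sample record instead of raising an NPE.
SELECT   t.id,
         t.oooa.oa.oab.oabc,
         t.oooa.oa.oab.oabc[1].rowValue2
FROM     `complex.json` t
ORDER BY t.oooa.oa.oab.oabc[1].rowValue2
LIMIT    50;
{code}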

> Complex JSON causes NPE
> -----------------------
>
>                 Key: DRILL-1894
>                 URL: https://issues.apache.org/jira/browse/DRILL-1894
>             Project: Apache Drill
>          Issue Type: Bug
>          Components: Execution - Data Types
>    Affects Versions: 0.7.0
>            Reporter: Chun Chang
>            Assignee: Chun Chang
>             Fix For: 0.9.0
>
>
> #Tue Dec 16 13:28:01 EST 2014
> git.commit.id.abbrev=3b0ff5d
> I have the following JSON record (the actual dataset is too big):
> {code}
> {
>     "id": 2,
>     "oooa": {
>         "oa": {
>             "oab": {
>                 "oabc": [
>                     {
>                         "rowId": 2
>                     },
>                     {
>                         "rowValue1": 2,
>                         "rowValue2": 2
>                     }
>                 ]
>             }
>         }
>     }
> }
> {code}
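> The element of interest is oabc[1]: with Drill's 0-based array indexing it is the second map in the array, so a path such as t.oooa.oa.oab.oabc[1].rowValue2 should resolve to 2 for this record rather than fail. A reduced projection illustrating that access (sketch only; not the query from the report):
> {code}
> -- Hypothetical reduced query: oabc[1] selects the second array element,
> -- whose rowValue2 field is 2 in the sample record above.
> SELECT t.id, t.oooa.oa.oab.oabc[1].rowValue2 FROM `complex.json` t;
> {code}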
> The following query caused an NPE:
> {code}
> SELECT   t.id, 
>          t.oooa.oa.oab.oabc, 
>          t.oooa.oa.oab.oabc[1].rowvalue2 
> FROM     `complex.json` t 
> ORDER BY t.oooa.oa.oab.oabc[1].rowvalue2 limit 50;
> {code}
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> select t.id, t.oooa.oa.oab.oabc, t.oooa.oa.oab.oabc[1].rowValue2 from `complex.json` t order by t.oooa.oa.oab.oabc[1].rowValue2 limit 50;
> Query failed: Query failed: Failure while running fragment.[ 8a2ee7e8-8c7b-4881-883e-7924884a0878 on qa-node117.qa.lab:31010 ]
> [ 8a2ee7e8-8c7b-4881-883e-7924884a0878 on qa-node117.qa.lab:31010 ]
> Error: exception while executing query: Failure while executing query. (state=,code=0)
> {code}
> Stack trace:
> {code}
> 2014-12-18 15:11:54,916 [2b6ca0c5-5b7d-3832-091f-67b37d4e3e6c:frag:1:2] WARN  o.a.d.e.w.fragment.FragmentExecutor - Error while initializing or executing fragment
> java.lang.NullPointerException: null
> 2014-12-18 15:11:54,916 [2b6ca0c5-5b7d-3832-091f-67b37d4e3e6c:frag:1:2] ERROR o.a.drill.exec.ops.FragmentContext - Fragment Context received failure.
> java.lang.NullPointerException: null
> 2014-12-18 15:11:54,916 [2b6ca0c5-5b7d-3832-091f-67b37d4e3e6c:frag:1:2] ERROR o.a.d.e.w.f.AbstractStatusReporter - Error 798d65b7-9cfb-4276-a50a-e9bae311a7ec: Failure while running fragment.
> java.lang.NullPointerException: null
> 2014-12-18 15:11:54,920 [2b6ca0c5-5b7d-3832-091f-67b37d4e3e6c:frag:2:0] ERROR o.a.d.e.p.i.p.StatusHandler - Failure while sending data to user.
> org.apache.drill.exec.rpc.RpcException: Interrupted while trying to get sending semaphore.
> 	at org.apache.drill.exec.rpc.data.DataTunnel.sendRecordBatch(DataTunnel.java:52) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	at org.apache.drill.exec.test.generated.PartitionerGen609$OutgoingRecordBatch.flush(PartitionerTemplate.java:320) [na:na]
> 	at org.apache.drill.exec.test.generated.PartitionerGen609.flushOutgoingBatches(PartitionerTemplate.java:134) [na:na]
> 	at org.apache.drill.exec.physical.impl.partitionsender.PartitionSenderRootExec.innerNext(PartitionSenderRootExec.java:176) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	at org.apache.drill.exec.physical.impl.BaseRootExec.next(BaseRootExec.java:57) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	at org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:114) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	at org.apache.drill.exec.work.WorkManager$RunnableWrapper.run(WorkManager.java:254) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_45]
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_45]
> 	at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
> Caused by: java.lang.InterruptedException: null
> 	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1301) ~[na:1.7.0_45]
> 	at java.util.concurrent.Semaphore.acquire(Semaphore.java:317) ~[na:1.7.0_45]
> 	at org.apache.drill.exec.rpc.data.DataTunnel.sendRecordBatch(DataTunnel.java:49) [drill-java-exec-0.7.0-SNAPSHOT-rebuffed.jar:0.7.0-SNAPSHOT]
> 	... 9 common frames omitted
> {code}
> Physical plan:
> {code}
> 0: jdbc:drill:schema=dfs.drillTestDirComplexJ> explain plan for select t.id, t.oooa.oa.oab.oabc, t.oooa.oa.oab.oabc[1].rowValue2 from `complex.json` t order by t.oooa.oa.oab.oabc[1].rowValue2 limit 50;
> +------------+------------+
> |    text    |    json    |
> +------------+------------+
> | 00-00    Screen
> 00-01      Project(id=[$0], EXPR$1=[$1], EXPR$2=[$2])
> 00-02        SelectionVectorRemover
> 00-03          Limit(fetch=[50])
> 00-04            SingleMergeExchange(sort0=[2 ASC])
> 01-01              SelectionVectorRemover
> 01-02                TopN(limit=[50])
> 01-03                  HashToRandomExchange(dist0=[[$2]])
> 02-01                    Project(id=[$1], EXPR$1=[ITEM(ITEM(ITEM($0, 'oa'), 'oab'), 'oabc')], EXPR$2=[ITEM(ITEM(ITEM(ITEM(ITEM($0, 'oa'), 'oab'), 'oabc'), 1), 'rowValue2')])
> 02-02                      Scan(groupscan=[EasyGroupScan [selectionRoot=/drill/testdata/complex_type/json/complex.json, numFiles=1, columns=[`id`, `oooa`.`oa`.`oab`.`oabc`, `oooa`.`oa`.`oab`.`oabc`[1].`rowValue2`], files=[maprfs:/drill/testdata/complex_type/json/complex.json]]])
>  | {
> {code}


