Posted to user@drill.apache.org by Akif Khan <ak...@innovaccer.com> on 2015/06/17 11:06:33 UTC
Error on running a flattening query
Hi all,
I ran the query below and hit the following error. My setup is a four-node
Amazon AWS cluster (32 GB RAM and 8 cores per node) running Ubuntu, with
HDFS and ZooKeeper installed:
*Query*: select flatten(campaign['funders'])['user_id'] from
`new_crowdfunding`;
The *structure of the new_crowdfunding table* is as follows:
https://gist.github.com/akifkhan/d864ad9dcf5be712ff24
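Conceptually, the query should behave like this Python sketch (the field
names mirror the query, but the sample records below are made up; the real
schema is in the gist above):

```python
# Conceptual sketch of what the Drill query does:
#   select flatten(campaign['funders'])['user_id'] from `new_crowdfunding`;
# FLATTEN emits one output row per element of the repeated map
# campaign.funders, and ['user_id'] then projects a single field from
# each element. Sample records are hypothetical.

rows = [
    {"campaign": {"funders": [{"user_id": "u1", "amount": 50},
                              {"user_id": "u2", "amount": 20}]}},
    {"campaign": {"funders": [{"user_id": "u3", "amount": 10}]}},
]

# One result row per funder across all input rows.
flattened = [funder["user_id"]
             for row in rows
             for funder in row["campaign"]["funders"]]

print(flattened)
```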
*Error after running for 10 seconds and printing various user_ids*:
java.lang.RuntimeException: java.sql.SQLException: SYSTEM ERROR:
java.lang.IllegalArgumentException: initialCapacity: -2147483648 (expectd:
0+)
Fragment 0:0
[Error Id: 4fa13e31-ad84-42c6-aa50-c80c92ab026d on hadoop-slave1:31010]
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
at
sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:85)
at sqlline.TableOutputFormat.print(TableOutputFormat.java:116)
at sqlline.SqlLine.print(SqlLine.java:1583)
at sqlline.Commands.execute(Commands.java:852)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:738)
at sqlline.SqlLine.begin(SqlLine.java:612)
at sqlline.SqlLine.start(SqlLine.java:366)
------------------------------------------------------------------------------------------------------------------------
*Verbose error*:
java.lang.RuntimeException: java.sql.SQLException: SYSTEM ERROR:
java.lang.IllegalArgumentException: initialCapacity: -2147483648 (expectd:
0+)
Fragment 0:0
[Error Id: a8a7c613-a9ed-4598-994a-2399cf8e69e4 on hadoop-slave1:31010]
(java.lang.IllegalArgumentException) initialCapacity: -2147483648
(expectd: 0+)
io.netty.buffer.PooledByteBufAllocatorL.validate():182
io.netty.buffer.PooledByteBufAllocatorL.directBuffer():170
org.apache.drill.exec.memory.TopLevelAllocator$ChildAllocator.buffer():258
org.apache.drill.exec.memory.TopLevelAllocator$ChildAllocator.buffer():273
org.apache.drill.exec.vector.VarCharVector.reAlloc():368
org.apache.drill.exec.vector.VarCharVector.copyFromSafe():273
org.apache.drill.exec.vector.NullableVarCharVector.copyFromSafe():313
org.apache.drill.exec.vector.NullableVarCharVector$TransferImpl.copyValueSafe():276
org.apache.drill.exec.vector.complex.RepeatedMapVector$RepeatedMapTransferPair.copyValueSafe():355
org.apache.drill.exec.vector.complex.MapVector$MapTransferPair.copyValueSafe():208
org.apache.drill.exec.vector.complex.MapVector.copyFromSafe():83
org.apache.drill.exec.vector.complex.impl.SingleMapReaderImpl.copyAsValue():97
org.apache.drill.exec.test.generated.FlattenerGen3.doEval():52
org.apache.drill.exec.test.generated.FlattenerGen3.flattenRecords():93
org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.handleRemainder():184
org.apache.drill.exec.physical.impl.flatten.FlattenRecordBatch.innerNext():116
org.apache.drill.exec.record.AbstractRecordBatch.next():146
org.apache.drill.exec.record.AbstractRecordBatch.next():105
org.apache.drill.exec.record.AbstractRecordBatch.next():95
org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext():51
org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext():129
org.apache.drill.exec.record.AbstractRecordBatch.next():146
org.apache.drill.exec.physical.impl.BaseRootExec.next():83
org.apache.drill.exec.physical.impl.ScreenCreator$ScreenRoot.innerNext():80
org.apache.drill.exec.physical.impl.BaseRootExec.next():73
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():259
org.apache.drill.exec.work.fragment.FragmentExecutor$1.run():253
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():415
org.apache.hadoop.security.UserGroupInformation.doAs():1556
org.apache.drill.exec.work.fragment.FragmentExecutor.run():253
org.apache.drill.common.SelfCleaningRunnable.run():38
java.util.concurrent.ThreadPoolExecutor.runWorker():1145
java.util.concurrent.ThreadPoolExecutor$Worker.run():615
java.lang.Thread.run():745
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
at
sqlline.TableOutputFormat$ResizingRowsProvider.next(TableOutputFormat.java:85)
at sqlline.TableOutputFormat.print(TableOutputFormat.java:116)
at sqlline.SqlLine.print(SqlLine.java:1583)
at sqlline.Commands.execute(Commands.java:852)
at sqlline.Commands.sql(Commands.java:751)
at sqlline.SqlLine.dispatch(SqlLine.java:738)
at sqlline.SqlLine.begin(SqlLine.java:612)
at sqlline.SqlLine.start(SqlLine.java:366)
at sqlline.SqlLine.main(SqlLine.java:259)
--
Regards
Akif Khan