Posted to dev@drill.apache.org by "kunal (JIRA)" <ji...@apache.org> on 2015/07/21 15:51:05 UTC

[jira] [Created] (DRILL-3530) Query : Regarding drill jdbc with big csv file.

kunal created DRILL-3530:
----------------------------

             Summary: Query : Regarding drill jdbc with big csv file.
                 Key: DRILL-3530
                 URL: https://issues.apache.org/jira/browse/DRILL-3530
             Project: Apache Drill
          Issue Type: Bug
          Components: Client - JDBC
    Affects Versions: 1.1.0
         Environment: MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0.ova
OR
Centos 6 , RAM 16 GB, HHD 1TB
            Reporter: kunal
            Assignee: Daniel Barclay (Drill)
            Priority: Blocker


I am using "MapR-Sandbox-For-Apache-Drill-1.0.0-4.1.0.ova" and copied a 10 GB CSV file (220 columns, 5 million rows) to /mapr/demo.mapr.com/data.
 I am running my application on an Apache Tomcat server and connecting to the Drillbit through JDBC. I am getting an error that says OutOfMemory (stack trace below).
 I also see a lot of log output written to the Tomcat console (e.g. JSON objects) and would like to suppress it. Is there a configuration option for this?
 If yes, how?
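On the console-logging question, one possible approach (a sketch, assuming the web application uses Logback as its SLF4J binding — the Drill client logs through SLF4J, and the `org.apache.drill` logger name is an assumption here) is to raise the level for the Drill packages in logback.xml:

```xml
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <!-- Quiet the Drill client's verbose output (JSON objects etc.) -->
  <logger name="org.apache.drill" level="WARN"/>

  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
  </root>
</configuration>
```

If the application uses Log4j instead, the equivalent logger-level setting would go in log4j.properties.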
 
con = DriverManager.getConnection("jdbc:drill:drillbit=localhost:31010", "admin", "admin");

or

con = DriverManager.getConnection("jdbc:drill:zk=localhost:2181");
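The "OutOfMemoryError: Direct buffer memory" in the trace below comes from Netty's direct-buffer allocation inside the JDBC client, not from the Java heap. One mitigation worth trying (a sketch, not a confirmed fix for this issue; the 4g value is a guess to tune against the machine's 16 GB RAM) is to raise the Tomcat JVM's direct-memory limit, e.g. in setenv.sh:

```shell
# bin/setenv.sh — hypothetical value; adjust for available RAM
export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxDirectMemorySize=4g"
```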

Query :

select 
columns[0] col1,columns[1] col2,columns[2] col3,columns[3] col4,columns[4] col5,columns[5] col6,columns[6] col7,columns[7] col8,columns[8] col9,columns[9] col10,columns[10] col11,columns[11] col12,columns[12] col13,columns[13] col14,columns[14] col15,columns[15] col16,columns[16] col17,columns[17] col18,columns[18] col19,columns[19] col20,columns[20] col21,columns[21] col22,columns[22] col23,columns[23] col24,columns[24] col25,columns[25] col26,columns[26] col27,columns[27] col28,columns[28] col29,columns[29] col30,columns[30] col31,columns[31] col32,columns[32] col33,columns[33] col34,columns[34] col35,columns[35] col36,columns[36] col37,columns[37] col38,columns[38] col39,columns[39] col40,columns[40] col41,columns[41] col42,columns[42] col43,columns[43] col44,columns[44] col45,columns[45] col46,columns[46] col47,columns[47] col48,columns[48] col49,columns[49] col50,columns[50] col51,columns[51] col52,columns[52] col53,columns[53] col54,columns[54] col55,columns[55] col56,columns[56] col57,columns[57] col58,columns[58] col59,columns[59] col60,columns[60] col61,columns[61] col62,columns[62] col63,columns[63] col64,columns[64] col65,columns[65] col66,columns[66] col67,columns[67] col68,columns[68] col69,columns[69] col70,columns[70] col71,columns[71] col72,columns[72] col73,columns[73] col74,columns[74] col75,columns[75] col76,columns[76] col77,columns[77] col78,columns[78] col79,columns[79] col80,columns[80] col81,columns[81] col82,columns[82] col83,columns[83] col84,columns[84] col85,columns[85] col86,columns[86] col87,columns[87] col88,columns[88] col89,columns[89] col90,columns[90] col91,columns[91] col92,columns[92] col93,columns[93] col94,columns[94] col95,columns[95] col96,columns[96] col97,columns[97] col98,columns[98] col99,columns[99] col100,columns[100] col101,columns[101] col102,columns[102] col103,columns[103] col104,columns[104] col105,columns[105] col106,columns[106] col107,columns[107] col108,columns[108] col109,columns[109] col110,columns[110] 
col111,columns[111] col112,columns[112] col113,columns[113] col114,columns[114] col115,columns[115] col116,columns[116] col117,columns[117] col118,columns[118] col119,columns[119] col120,columns[120] col121,columns[121] col122,columns[122] col123,columns[123] col124,columns[124] col125,columns[125] col126,columns[126] col127,columns[127] col128,columns[128] col129,columns[129] col130,columns[130] col131,columns[131] col132,columns[132] col133,columns[133] col134,columns[134] col135,columns[135] col136,columns[136] col137,columns[137] col138,columns[138] col139,columns[139] col140,columns[140] col141,columns[141] col142,columns[142] col143,columns[143] col144,columns[144] col145,columns[145] col146,columns[146] col147,columns[147] col148,columns[148] col149,columns[149] col150,columns[150] col151,columns[151] col152,columns[152] col153,columns[153] col154,columns[154] col155,columns[155] col156,columns[156] col157,columns[157] col158,columns[158] col159,columns[159] col160,columns[160] col161,columns[161] col162,columns[162] col163,columns[163] col164,columns[164] col165,columns[165] col166,columns[166] col167,columns[167] col168,columns[168] col169,columns[169] col170,columns[170] col171,columns[171] col172,columns[172] col173,columns[173] col174,columns[174] col175,columns[175] col176,columns[176] col177,columns[177] col178,columns[178] col179,columns[179] col180,columns[180] col181,columns[181] col182,columns[182] col183,columns[183] col184,columns[184] col185,columns[185] col186,columns[186] col187,columns[187] col188,columns[188] col189,columns[189] col190,columns[190] col191,columns[191] col192,columns[192] col193,columns[193] col194,columns[194] col195,columns[195] col196,columns[196] col197,columns[197] col198,columns[198] col199,columns[199] col200,columns[200] col201,columns[201] col202,columns[202] col203,columns[203] col204,columns[204] col205,columns[205] col206,columns[206] col207,columns[207] col208,columns[208] col209,columns[209] col210,columns[210] 
col211,columns[211] col212,columns[212] col213,columns[213] col214,columns[214] col215,columns[215] col216,columns[216] col217,columns[217] col218,columns[218] col219,columns[219] col220
from dfs.root.`SampleData220_Cols.csv`
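The 220-column projection above is error-prone to write by hand; it can be generated instead. A minimal sketch (`QueryBuilder` and `buildSelect` are hypothetical names, not part of Drill) that produces the same statement shape:

```java
// Generates "select columns[0] col1, ..., columns[N-1] colN from <table>"
// mirroring the hand-written query above.
public class QueryBuilder {

    static String buildSelect(int numCols, String table) {
        StringBuilder sb = new StringBuilder("select ");
        for (int i = 0; i < numCols; i++) {
            if (i > 0) {
                sb.append(", ");
            }
            // Drill exposes text-file fields via the columns[] array.
            sb.append("columns[").append(i).append("] col").append(i + 1);
        }
        sb.append(" from ").append(table);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildSelect(220, "dfs.root.`SampleData220_Cols.csv`"));
    }
}
```

The generated string can then be passed to `Statement.executeQuery` over either of the connections shown earlier.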

 

io.netty.handler.codec.DecoderException: java.lang.OutOfMemoryError: Direct buffer memory
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:233) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [drill-jdbc-all-1.1.0.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [drill-jdbc-all-1.1.0.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:658) ~[na:1.7.0_45]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[na:1.7.0_45]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:306) ~[na:1.7.0_45]
at io.netty.buffer.PoolArena$DirectArena.newUnpooledChunk(PoolArena.java:443) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.PoolArena.allocateHuge(PoolArena.java:187) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.PoolArena.allocate(PoolArena.java:165) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.PoolArena.reallocate(PoolArena.java:280) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.PooledByteBuf.capacity(PooledByteBuf.java:110) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.AbstractByteBuf.ensureWritable(AbstractByteBuf.java:251) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:849) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:841) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:831) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.WrappedByteBuf.writeBytes(WrappedByteBuf.java:600) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.buffer.UnsafeDirectLittleEndian.writeBytes(UnsafeDirectLittleEndian.java:28) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.handler.codec.ByteToMessageDecoder$1.cumulate(ByteToMessageDecoder.java:92) ~[drill-jdbc-all-1.1.0.jar:na]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:227) ~[drill-jdbc-all-1.1.0.jar:na]
... 13 common frames omitted



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)