Posted to user@hive.apache.org by Sanjeev Verma <sa...@gmail.com> on 2015/08/19 18:26:19 UTC

hiveserver2 hangs

Hi
We are experiencing a strange problem with HiveServer2: one of the jobs gets a "GC overhead limit exceeded" error from its mapred task and hangs, even though enough heap is available. We are not able to identify what is causing this issue.
Could anybody help me identify the issue and let me know what pointers I need to look at?

Thanks

regarding hive classloader

Posted by Wangwenli <wa...@huawei.com>.
Hi guys,

Recently we hit a "stream closed" exception; the details are here: https://issues.apache.org/jira/browse/HIVE-11681

The root cause is that "add jar" creates a new classloader, and when the session closes, that classloader is closed as well. Since it is a URLClassLoader, the streams cached by this classloader are also closed, which causes other threads to hit the "stream closed" exception.

We think this is a common issue for all Hive users from 0.13.1 onward, which merged HIVE-3969<https://issues.apache.org/jira/browse/HIVE-3969> (Session state for hive server should be cleaned-up).

It is a little difficult to resolve, because the calls involved, such as caching and closing the streams, happen outside Hive's control.

So my question is: why does HiveServer2 create a new URLClassLoader for each session instead of reusing one shared URLClassLoader?

Expecting your guidance, thanks!

Regards
Wenli
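Not Hive's actual code path, but the lifecycle Wenli describes can be sketched with a plain URLClassLoader. This is a minimal sketch; the jar and resource names are made up for illustration:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class SessionLoaderDemo {

    // Returns {resourceVisibleWhileOpen, resourceVisibleAfterClose}.
    static boolean[] run() throws Exception {
        // A throwaway jar standing in for an "add jar" artifact.
        File jar = File.createTempFile("session", ".jar");
        jar.deleteOnExit();
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(jar))) {
            jos.putNextEntry(new JarEntry("udf.txt"));
            jos.write("payload".getBytes("UTF-8"));
            jos.closeEntry();
        }

        // The per-session loader created for "add jar".
        URLClassLoader sessionLoader =
                new URLClassLoader(new URL[] { jar.toURI().toURL() }, null);

        boolean visibleWhileOpen;
        try (InputStream in = sessionLoader.getResourceAsStream("udf.txt")) {
            visibleWhileOpen = (in != null);
        }

        // Session teardown (the HIVE-3969 cleanup) closes the loader, which
        // also closes the jar files behind it. A stream that another thread
        // still holds from this loader can then fail with "stream closed",
        // which is the failure mode HIVE-11681 describes.
        sessionLoader.close();

        boolean visibleAfterClose = (sessionLoader.getResource("udf.txt") != null);
        return new boolean[] { visibleWhileOpen, visibleAfterClose };
    }

    public static void main(String[] args) throws Exception {
        boolean[] r = run();
        System.out.println("visible while open:  " + r[0]);
        System.out.println("visible after close: " + r[1]);
    }
}
```

A shared loader would avoid the teardown race, at the cost of jars added by one session staying visible (and pinned in memory) for all later sessions.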

Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
Thanks Swarnim for your help.

BTW, I tried increasing the heap size of HS2 but am seeing the same
exception. Where did this exception originate? It looks like it came from
the Thrift client. Any idea what operation it is performing, given the
stack below?

Local Variable: org.apache.thrift.TByteArrayOutputStream#42
Local Variable: byte[]#5378
at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
Local Variable: byte[]#2
at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
Local Variable: java.util.ArrayList$Itr#1
at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
at org.apache.thrift.TUnion.write(TUnion.java:152)
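Reading the stack bottom-up: Thrift is serializing a TStringColumn result set into a TByteArrayOutputStream, and the OOM fires in Arrays.copyOf while that buffer is being enlarged. A back-of-the-envelope sketch of that growth pattern (not Hive code; it assumes the stock java.io.ByteArrayOutputStream, which starts at 32 bytes and roughly doubles when full):

```java
public class BufferGrowthDemo {

    // Each time the internal byte[] fills up, ByteArrayOutputStream.grow()
    // allocates a roughly doubled array via Arrays.copyOf -- the frame at
    // the top of the posted trace. Count the doublings for `total` bytes:
    static int growthSteps(long total) {
        long capacity = 32;      // default initial capacity
        int steps = 0;
        while (capacity < total) {
            capacity <<= 1;      // grow() roughly doubles the buffer
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        // A 2 GB serialized row set needs ~26 doublings, and the last copy
        // momentarily holds both the old and new arrays -- enough pressure
        // to tip a busy 8 GB HS2 heap over the edge.
        System.out.println(growthSteps(2L * 1024 * 1024 * 1024)); // prints 26
    }
}
```

In other words, the operation is HS2 buffering an entire fetch of query results in memory before writing it to the SASL transport, so one large result set can account for the OOM even when average heap use looks healthy.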

On Wed, Sep 9, 2015 at 8:19 AM, kulkarni.swarnim@gmail.com <
kulkarni.swarnim@gmail.com> wrote:

> Sanjeev,
>
> I am going off this exception in the stacktrace that you posted.
>
> "at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)"
>
> which def. indicates that it's not very happy memory wise. I would def.
> recommend to bump up the memory and see if it helps. If not, we can debug
> further from there.
>
> On Tue, Sep 8, 2015 at 12:17 PM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> What this exception implies here? how to identify the problem here.
>> Thanks
>>
>> On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma <sanjeev.verma82@gmail.com
>> > wrote:
>>
>>> We have 8GB HS2 java heap, we have not tried any bumping.
>>>
>>> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swarnim@gmail.com <
>>> kulkarni.swarnim@gmail.com> wrote:
>>>
>>>> How much memory have you currently provided to HS2? Have you tried
>>>> bumping that up?
>>>>
>>>> On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <
>>>> sanjeev.verma82@gmail.com> wrote:
>>>>
>>>>> *I am getting the following exception when the HS2 is crashing, any
>>>>> idea why it has happening*
>>>>>
>>>>> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
>>>>> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
>>>>> at java.util.Arrays.copyOf(Arrays.java:2271)
>>>>> Local Variable: byte[]#1
>>>>> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>>>>> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>>>>> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>>>>> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
>>>>> Local Variable: byte[]#5378
>>>>> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
>>>>> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
>>>>> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>>>>> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
>>>>> Local Variable: byte[]#2
>>>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
>>>>> Local Variable: java.util.ArrayList$Itr#1
>>>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
>>>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
>>>>> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
>>>>> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
>>>>> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
>>>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
>>>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
>>>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
>>>>> at org.apache.thrift.TUnion.write(TUnion.java:152)
>>>>>
>>>>>
>>>>>
>>>>> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
>>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>>
>>>>>> Sanjeev,
>>>>>>
>>>>>> One possibility is that you are running into[1] which affects hive
>>>>>> 0.13. Is it possible for you to apply the patch on [1] and see if it fixes
>>>>>> your problem?
>>>>>>
>>>>>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>>>>>
>>>>>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <
>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>
>>>>>>> We are using hive-0.13 with hadoop1.
>>>>>>>
>>>>>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>>>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>>>>
>>>>>>>> Sanjeev,
>>>>>>>>
>>>>>>>> Can you tell me more details about your hive version/hadoop version
>>>>>>>> etc.
>>>>>>>>
>>>>>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Can somebody gives me some pointer to looked upon?
>>>>>>>>>
>>>>>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi
>>>>>>>>>> We are experiencing a strange problem with the hiveserver2, in
>>>>>>>>>> one of the job it gets the GC limit exceed from mapred task and hangs even
>>>>>>>>>> having enough heap available.we are not able to identify what causing this
>>>>>>>>>> issue.
>>>>>>>>>> Could anybody help me identify the issue and let me know what
>>>>>>>>>> pointers I need to looked up.
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Swarnim
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Swarnim
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Swarnim
>>>>
>>>
>>>
>>
>
>
> --
> Swarnim
>

Re: hiveserver2 hangs

Posted by "kulkarni.swarnim@gmail.com" <ku...@gmail.com>.
Sanjeev,

I am going off this exception in the stacktrace that you posted.

"at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)"

which definitely indicates that it's not very happy memory-wise. I would
definitely recommend bumping up the memory to see if it helps. If not, we
can debug further from there.
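For reference, the HS2 heap is usually raised in hive-env.sh. A sketch only; the exact variable names (HADOOP_CLIENT_OPTS vs. HADOOP_HEAPSIZE, and the $SERVICE check) vary across Hive versions and distributions, so treat them as assumptions to verify against your install:

```shell
# $HIVE_CONF_DIR/hive-env.sh -- sketch; variable names differ by distro.
if [ "$SERVICE" = "hiveserver2" ]; then
  # Bump the heap past the current 8g, and keep a dump for post-mortems.
  export HADOOP_CLIENT_OPTS="-Xmx12g -XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/tmp/hs2.hprof $HADOOP_CLIENT_OPTS"
fi
```

The -XX:+HeapDumpOnOutOfMemoryError flag is worth adding regardless of the size chosen, since it captures the heap at the moment of failure for offline analysis.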

On Tue, Sep 8, 2015 at 12:17 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> What this exception implies here? how to identify the problem here.
> Thanks
>
> On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> We have 8GB HS2 java heap, we have not tried any bumping.
>>
>> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swarnim@gmail.com <
>> kulkarni.swarnim@gmail.com> wrote:
>>
>>> How much memory have you currently provided to HS2? Have you tried
>>> bumping that up?
>>>
>>> On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sanjeev.verma82@gmail.com
>>> > wrote:
>>>
>>>> *I am getting the following exception when the HS2 is crashing, any
>>>> idea why it has happening*
>>>>
>>>> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
>>>> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
>>>> at java.util.Arrays.copyOf(Arrays.java:2271)
>>>> Local Variable: byte[]#1
>>>> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>>>> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>>>> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>>>> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
>>>> Local Variable: byte[]#5378
>>>> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
>>>> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
>>>> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>>>> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
>>>> Local Variable: byte[]#2
>>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
>>>> Local Variable: java.util.ArrayList$Itr#1
>>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
>>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
>>>> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
>>>> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
>>>> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
>>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
>>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
>>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
>>>> at org.apache.thrift.TUnion.write(TUnion.java:152)
>>>>
>>>>
>>>>
>>>> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>
>>>>> Sanjeev,
>>>>>
>>>>> One possibility is that you are running into[1] which affects hive
>>>>> 0.13. Is it possible for you to apply the patch on [1] and see if it fixes
>>>>> your problem?
>>>>>
>>>>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>>>>
>>>>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <
>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>
>>>>>> We are using hive-0.13 with hadoop1.
>>>>>>
>>>>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>>>
>>>>>>> Sanjeev,
>>>>>>>
>>>>>>> Can you tell me more details about your hive version/hadoop version
>>>>>>> etc.
>>>>>>>
>>>>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>
>>>>>>>> Can somebody gives me some pointer to looked upon?
>>>>>>>>
>>>>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi
>>>>>>>>> We are experiencing a strange problem with the hiveserver2, in one
>>>>>>>>> of the job it gets the GC limit exceed from mapred task and hangs even
>>>>>>>>> having enough heap available.we are not able to identify what causing this
>>>>>>>>> issue.
>>>>>>>>> Could anybody help me identify the issue and let me know what
>>>>>>>>> pointers I need to looked up.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Swarnim
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Swarnim
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Swarnim
>>>
>>
>>
>


-- 
Swarnim

Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
What does this exception imply here? How can I identify the problem?
Thanks
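One way to answer that question is to capture heap and thread dumps from the running HS2 process and see what dominates the heap when it hangs. A sketch using the stock JDK tools (the pid lookup is illustrative; adjust for your setup):

```shell
# Find the HiveServer2 JVM (illustrative; adjust the grep for your install).
HS2_PID=$(jps -lm | grep -i hiveserver2 | awk '{print $1}')

# Thread dump: shows what each handler thread was doing when it hung.
jstack "$HS2_PID" > hs2-threads.txt

# Heap dump of live objects; open in Eclipse MAT or jhat to see which
# result buffers / TColumn objects dominate the heap.
jmap -dump:live,format=b,file=hs2-heap.hprof "$HS2_PID"

# GC utilization sampled every second: confirms "GC overhead limit exceeded"
# (old gen pinned near 100% with back-to-back full GCs).
jstat -gcutil "$HS2_PID" 1000
```

If the dominator tree in the heap dump is mostly byte[] and thrift TColumn/TStringColumn objects, that points at one oversized result fetch rather than a general leak.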

On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> We have 8GB HS2 java heap, we have not tried any bumping.
>
> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swarnim@gmail.com <
> kulkarni.swarnim@gmail.com> wrote:
>
>> How much memory have you currently provided to HS2? Have you tried
>> bumping that up?
>>
>> On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sa...@gmail.com>
>> wrote:
>>
>>> *I am getting the following exception when the HS2 is crashing, any idea
>>> why it has happening*
>>>
>>> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
>>> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
>>> at java.util.Arrays.copyOf(Arrays.java:2271)
>>> Local Variable: byte[]#1
>>> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>>> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>>> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>>> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
>>> Local Variable: byte[]#5378
>>> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
>>> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
>>> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>>> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
>>> Local Variable: byte[]#2
>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
>>> Local Variable: java.util.ArrayList$Itr#1
>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
>>> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
>>> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
>>> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
>>> at org.apache.thrift.TUnion.write(TUnion.java:152)
>>>
>>>
>>>
>>> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
>>> kulkarni.swarnim@gmail.com> wrote:
>>>
>>>> Sanjeev,
>>>>
>>>> One possibility is that you are running into[1] which affects hive
>>>> 0.13. Is it possible for you to apply the patch on [1] and see if it fixes
>>>> your problem?
>>>>
>>>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>>>
>>>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <
>>>> sanjeev.verma82@gmail.com> wrote:
>>>>
>>>>> We are using hive-0.13 with hadoop1.
>>>>>
>>>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>>
>>>>>> Sanjeev,
>>>>>>
>>>>>> Can you tell me more details about your hive version/hadoop version
>>>>>> etc.
>>>>>>
>>>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>
>>>>>>> Can somebody gives me some pointer to looked upon?
>>>>>>>
>>>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>> We are experiencing a strange problem with the hiveserver2, in one
>>>>>>>> of the job it gets the GC limit exceed from mapred task and hangs even
>>>>>>>> having enough heap available.we are not able to identify what causing this
>>>>>>>> issue.
>>>>>>>> Could anybody help me identify the issue and let me know what
>>>>>>>> pointers I need to looked up.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Swarnim
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Swarnim
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>

Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
What this exception implies here? how to identify the problem here.
Thanks

On Tue, Sep 8, 2015 at 10:44 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> We have 8GB HS2 java heap, we have not tried any bumping.
>
> On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swarnim@gmail.com <
> kulkarni.swarnim@gmail.com> wrote:
>
>> How much memory have you currently provided to HS2? Have you tried
>> bumping that up?
>>
>> On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sa...@gmail.com>
>> wrote:
>>
>>> *I am getting the following exception when the HS2 is crashing, any idea
>>> why it has happening*
>>>
>>> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
>>> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
>>> at java.util.Arrays.copyOf(Arrays.java:2271)
>>> Local Variable: byte[]#1
>>> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>>> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutput
>>> Stream.java:93)
>>> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>>> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
>>> Local Variable: byte[]#5378
>>> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransp
>>> ort.java:446)
>>> at org.apache.thrift.transport.TSaslServerTransport.write(TSasl
>>> ServerTransport.java:41)
>>> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryP
>>> rotocol.java:163)
>>> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBina
>>> ryProtocol.java:186)
>>> Local Variable: byte[]#2
>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColu
>>> mnStandardScheme.write(TStringColumn.java:490)
>>> Local Variable: java.util.ArrayList$Itr#1
>>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColu
>>> mnStandardScheme.write(TStringColumn.java:433)
>>> Local Variable: org.apache.hive.service.cli.th
>>> rift.TStringColumn$TStringColumnStandardScheme#1
>>> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStri
>>> ngColumn.java:371)
>>> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWri
>>> teValue(TColumn.java:381)
>>> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
>>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
>>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
>>> at org.apache.thrift.TUnion.write(TUnion.java:152)
>>>
>>>
>>>
>>> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
>>> kulkarni.swarnim@gmail.com> wrote:
>>>
>>>> Sanjeev,
>>>>
>>>> One possibility is that you are running into[1] which affects hive
>>>> 0.13. Is it possible for you to apply the patch on [1] and see if it fixes
>>>> your problem?
>>>>
>>>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>>>
>>>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <
>>>> sanjeev.verma82@gmail.com> wrote:
>>>>
>>>>> We are using hive-0.13 with hadoop1.
>>>>>
>>>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>>
>>>>>> Sanjeev,
>>>>>>
>>>>>> Can you tell me more details about your hive version/hadoop version
>>>>>> etc.
>>>>>>
>>>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>
>>>>>>> Can somebody gives me some pointer to looked upon?
>>>>>>>
>>>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi
>>>>>>>> We are experiencing a strange problem with the hiveserver2, in one
>>>>>>>> of the job it gets the GC limit exceed from mapred task and hangs even
>>>>>>>> having enough heap available.we are not able to identify what causing this
>>>>>>>> issue.
>>>>>>>> Could anybody help me identify the issue and let me know what
>>>>>>>> pointers I need to looked up.
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Swarnim
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Swarnim
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>

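For readers puzzling over the frames above: they show HS2 encoding a TStringColumn result batch with Thrift's TBinaryProtocol into an in-memory buffer before the SASL transport flushes it. Below is a minimal model of what writeString does (my sketch, not Hive or Thrift source; the encoding modeled is the standard TBinaryProtocol one, a 4-byte big-endian length prefix followed by the UTF-8 bytes):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Model of TBinaryProtocol.writeString as seen in the stack above:
// writeI32(length) then the raw UTF-8 bytes, all staged in memory
// before anything reaches the wire.
public class WriteStringModel {

    static byte[] encodeString(String s) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + utf8.length); // big-endian by default
        buf.putInt(utf8.length); // what TBinaryProtocol.writeI32 emits
        buf.put(utf8);
        return buf.array();
    }

    public static void main(String[] args) {
        // A TStringColumn is a whole column of such strings, so a wide
        // result or a large fetch batch multiplies this cost before
        // anything is flushed to the client.
        System.out.println(encodeString("hello").length); // 4 + 5 = 9
    }
}
```

The point is that the full encoded batch sits in a heap buffer, which is why the surrounding OutOfMemoryError originates in the output stream rather than in query execution.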
Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
We have an 8 GB HS2 Java heap; we have not tried bumping it up.

On Tue, Sep 8, 2015 at 8:14 PM, kulkarni.swarnim@gmail.com <
kulkarni.swarnim@gmail.com> wrote:

> How much memory have you currently provided to HS2? Have you tried bumping
> that up?
>
> On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> *I am getting the following exception when the HS2 is crashing, any idea
>> why it has happening*
>>
>> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
>> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
>> at java.util.Arrays.copyOf(Arrays.java:2271)
>> Local Variable: byte[]#1
>> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
>> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
>> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
>> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
>> Local Variable: byte[]#5378
>> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
>> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
>> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
>> Local Variable: byte[]#2
>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
>> Local Variable: java.util.ArrayList$Itr#1
>> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
>> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
>> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
>> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
>> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
>> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
>> at org.apache.thrift.TUnion.write(TUnion.java:152)
>>
>>
>>
>> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
>> kulkarni.swarnim@gmail.com> wrote:
>>
>>> Sanjeev,
>>>
>>> One possibility is that you are running into[1] which affects hive 0.13.
>>> Is it possible for you to apply the patch on [1] and see if it fixes your
>>> problem?
>>>
>>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>>
>>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <
>>> sanjeev.verma82@gmail.com> wrote:
>>>
>>>> We are using hive-0.13 with hadoop1.
>>>>
>>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>>> kulkarni.swarnim@gmail.com> wrote:
>>>>
>>>>> Sanjeev,
>>>>>
>>>>> Can you tell me more details about your hive version/hadoop version
>>>>> etc.
>>>>>
>>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>
>>>>>> Can somebody gives me some pointer to looked upon?
>>>>>>
>>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>>
>>>>>>> Hi
>>>>>>> We are experiencing a strange problem with the hiveserver2, in one
>>>>>>> of the job it gets the GC limit exceed from mapred task and hangs even
>>>>>>> having enough heap available.we are not able to identify what causing this
>>>>>>> issue.
>>>>>>> Could anybody help me identify the issue and let me know what
>>>>>>> pointers I need to looked up.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Swarnim
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Swarnim
>>>
>>
>>
>
>
> --
> Swarnim
>

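One way an 8 GB heap can still blow up on this path (a back-of-envelope model, not something measured from HS2): the stack above dies in ByteArrayOutputStream.grow, which resizes by roughly doubling via Arrays.copyOf, so the old and new buffers are briefly live at the same time. A sketch of that growth policy:

```java
// Rough model of java.io.ByteArrayOutputStream's growth policy (the
// ByteArrayOutputStream.grow / Arrays.copyOf frames in the stack above):
// capacity roughly doubles until it covers the bytes written, and the
// copy momentarily keeps both the old and the new buffer alive.
public class GrowthModel {

    static long capacityAfter(long initialCapacity, long bytesWritten) {
        long cap = initialCapacity;
        while (cap < bytesWritten) {
            cap <<= 1; // grow() doubles, then Arrays.copyOf copies over
        }
        return cap;
    }

    public static void main(String[] args) {
        // Buffering ~600 MB of serialized result rows ends in a 1 GiB
        // buffer, and the final resize transiently needs ~512 MiB + 1 GiB
        // of contiguous heap for a single response.
        System.out.println(capacityAfter(32, 600_000_000L));
    }
}
```

A few concurrent fetches of that size can exhaust even a large heap, which matches a crash that happens despite plenty of configured memory.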
Re: hiveserver2 hangs

Posted by "kulkarni.swarnim@gmail.com" <ku...@gmail.com>.
How much memory have you currently provided to HS2? Have you tried bumping
that up?

On Mon, Sep 7, 2015 at 1:09 AM, Sanjeev Verma <sa...@gmail.com>
wrote:

> *I am getting the following exception when the HS2 is crashing, any idea
> why it has happening*
>
> "pool-1-thread-121" prio=4 tid=19283 RUNNABLE
> at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
> at java.util.Arrays.copyOf(Arrays.java:2271)
> Local Variable: byte[]#1
> at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
> at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
> Local Variable: org.apache.thrift.TByteArrayOutputStream#42
> Local Variable: byte[]#5378
> at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
> at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
> at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
> at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
> Local Variable: byte[]#2
> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
> Local Variable: java.util.ArrayList$Itr#1
> at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
> at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
> at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
> Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
> Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
> at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
> at org.apache.thrift.TUnion.write(TUnion.java:152)
>
>
>
> On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
> kulkarni.swarnim@gmail.com> wrote:
>
>> Sanjeev,
>>
>> One possibility is that you are running into[1] which affects hive 0.13.
>> Is it possible for you to apply the patch on [1] and see if it fixes your
>> problem?
>>
>> [1] https://issues.apache.org/jira/browse/HIVE-10410
>>
>> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <sanjeev.verma82@gmail.com
>> > wrote:
>>
>>> We are using hive-0.13 with hadoop1.
>>>
>>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>>> kulkarni.swarnim@gmail.com> wrote:
>>>
>>>> Sanjeev,
>>>>
>>>> Can you tell me more details about your hive version/hadoop version etc.
>>>>
>>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>>> sanjeev.verma82@gmail.com> wrote:
>>>>
>>>>> Can somebody gives me some pointer to looked upon?
>>>>>
>>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>>> sanjeev.verma82@gmail.com> wrote:
>>>>>
>>>>>> Hi
>>>>>> We are experiencing a strange problem with the hiveserver2, in one of
>>>>>> the job it gets the GC limit exceed from mapred task and hangs even having
>>>>>> enough heap available.we are not able to identify what causing this issue.
>>>>>> Could anybody help me identify the issue and let me know what
>>>>>> pointers I need to looked up.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Swarnim
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>


-- 
Swarnim

regarding hive classloader

Posted by Wangwenli <wa...@huawei.com>.
Hi guys,

Recently we hit a "stream closed" exception; the details are here: https://issues.apache.org/jira/browse/HIVE-11681

The root cause is that "add jar" creates a new classloader, and when the session is closed, that classloader is closed as well. Since it is a URLClassLoader, the streams it has cached are closed too, which causes other threads to hit the "stream closed" exception.

We think this is a common issue for all Hive users from hive 0.13.1 onwards, which merged HIVE-3969 (https://issues.apache.org/jira/browse/HIVE-3969), "Session state for hive server should be cleaned-up".

It is a little difficult to resolve, because all the relevant calls (caching the stream, closing the stream) happen outside of Hive's control.

So my question is: why does hiveserver create a new URLClassLoader for each session instead of reusing one shared URLClassLoader?

We would appreciate your guidance, thanks!

Regards
Wenli

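A self-contained sketch of the failure mode described in the HIVE-11681 report above (my illustration, not Hive code): URLClassLoader.close() closes the jar files backing streams handed out by getResourceAsStream, so a thread still reading such a stream after session cleanup sees an IOException, per the URLClassLoader.close() javadoc.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

// Sketch of the HIVE-11681 symptom: a stream obtained through a
// URLClassLoader becomes unusable once the loader is closed, as
// happens when HiveServer2 tears down a session's "add jar" loader.
public class StreamClosedDemo {

    // Returns true if reading after close() fails with an IOException.
    static boolean demo() throws Exception {
        // Build a tiny jar so the example is self-contained.
        File jar = File.createTempFile("demo", ".jar");
        jar.deleteOnExit();
        try (JarOutputStream jos = new JarOutputStream(new FileOutputStream(jar))) {
            jos.putNextEntry(new JarEntry("data.txt"));
            jos.write("hello".getBytes("UTF-8"));
            jos.closeEntry();
        }

        URLClassLoader loader = new URLClassLoader(new URL[] { jar.toURI().toURL() });
        InputStream in = loader.getResourceAsStream("data.txt");
        if (in == null) {
            return false;
        }
        in.read();       // fine while the loader is open
        loader.close();  // what session cleanup does to the per-session loader
        try {
            in.read();   // the jar file backing this stream is now closed
            return false;
        } catch (IOException expected) {
            return true; // the "stream closed" the other thread observes
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("read after close failed: " + demo());
    }
}
```

A shared loader would avoid this particular failure, at the cost of losing per-session isolation of added jars, which is the trade-off the question raises.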
Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
*I am getting the following exception when HS2 crashes. Any idea
why it is happening?*

"pool-1-thread-121" prio=4 tid=19283 RUNNABLE
at java.lang.OutOfMemoryError.<init>(OutOfMemoryError.java:48)
at java.util.Arrays.copyOf(Arrays.java:2271)
Local Variable: byte[]#1
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
Local Variable: org.apache.thrift.TByteArrayOutputStream#42
Local Variable: byte[]#5378
at org.apache.thrift.transport.TSaslTransport.write(TSaslTransport.java:446)
at org.apache.thrift.transport.TSaslServerTransport.write(TSaslServerTransport.java:41)
at org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
at org.apache.thrift.protocol.TBinaryProtocol.writeString(TBinaryProtocol.java:186)
Local Variable: byte[]#2
at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:490)
Local Variable: java.util.ArrayList$Itr#1
at org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme.write(TStringColumn.java:433)
Local Variable: org.apache.hive.service.cli.thrift.TStringColumn$TStringColumnStandardScheme#1
at org.apache.hive.service.cli.thrift.TStringColumn.write(TStringColumn.java:371)
at org.apache.hive.service.cli.thrift.TColumn.standardSchemeWriteValue(TColumn.java:381)
Local Variable: org.apache.hive.service.cli.thrift.TColumn#504
Local Variable: org.apache.hive.service.cli.thrift.TStringColumn#453
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:244)
at org.apache.thrift.TUnion$TUnionStandardScheme.write(TUnion.java:213)
at org.apache.thrift.TUnion.write(TUnion.java:152)



On Fri, Aug 21, 2015 at 6:16 AM, kulkarni.swarnim@gmail.com <
kulkarni.swarnim@gmail.com> wrote:

> Sanjeev,
>
> One possibility is that you are running into[1] which affects hive 0.13.
> Is it possible for you to apply the patch on [1] and see if it fixes your
> problem?
>
> [1] https://issues.apache.org/jira/browse/HIVE-10410
>
> On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> We are using hive-0.13 with hadoop1.
>>
>> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
>> kulkarni.swarnim@gmail.com> wrote:
>>
>>> Sanjeev,
>>>
>>> Can you tell me more details about your hive version/hadoop version etc.
>>>
>>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <
>>> sanjeev.verma82@gmail.com> wrote:
>>>
>>>> Can somebody gives me some pointer to looked upon?
>>>>
>>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>>> sanjeev.verma82@gmail.com> wrote:
>>>>
>>>>> Hi
>>>>> We are experiencing a strange problem with the hiveserver2, in one of
>>>>> the job it gets the GC limit exceed from mapred task and hangs even having
>>>>> enough heap available.we are not able to identify what causing this issue.
>>>>> Could anybody help me identify the issue and let me know what pointers
>>>>> I need to looked up.
>>>>>
>>>>> Thanks
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Swarnim
>>>
>>
>>
>
>
> --
> Swarnim
>

Re: hiveserver2 hangs

Posted by "kulkarni.swarnim@gmail.com" <ku...@gmail.com>.
Sanjeev,

One possibility is that you are running into[1] which affects hive 0.13. Is
it possible for you to apply the patch on [1] and see if it fixes your
problem?

[1] https://issues.apache.org/jira/browse/HIVE-10410

On Thu, Aug 20, 2015 at 6:12 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> We are using hive-0.13 with hadoop1.
>
> On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
> kulkarni.swarnim@gmail.com> wrote:
>
>> Sanjeev,
>>
>> Can you tell me more details about your hive version/hadoop version etc.
>>
>> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <sanjeev.verma82@gmail.com
>> > wrote:
>>
>>> Can somebody gives me some pointer to looked upon?
>>>
>>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <
>>> sanjeev.verma82@gmail.com> wrote:
>>>
>>>> Hi
>>>> We are experiencing a strange problem with the hiveserver2, in one of
>>>> the job it gets the GC limit exceed from mapred task and hangs even having
>>>> enough heap available.we are not able to identify what causing this issue.
>>>> Could anybody help me identify the issue and let me know what pointers
>>>> I need to looked up.
>>>>
>>>> Thanks
>>>>
>>>
>>>
>>
>>
>> --
>> Swarnim
>>
>
>


-- 
Swarnim

Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
We are using hive-0.13 with hadoop1.

On Thu, Aug 20, 2015 at 11:49 AM, kulkarni.swarnim@gmail.com <
kulkarni.swarnim@gmail.com> wrote:

> Sanjeev,
>
> Can you tell me more details about your hive version/hadoop version etc.
>
> On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> Can somebody gives me some pointer to looked upon?
>>
>> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <sanjeev.verma82@gmail.com
>> > wrote:
>>
>>> Hi
>>> We are experiencing a strange problem with the hiveserver2, in one of
>>> the job it gets the GC limit exceed from mapred task and hangs even having
>>> enough heap available.we are not able to identify what causing this issue.
>>> Could anybody help me identify the issue and let me know what pointers I
>>> need to looked up.
>>>
>>> Thanks
>>>
>>
>>
>
>
> --
> Swarnim
>

Re: hiveserver2 hangs

Posted by "kulkarni.swarnim@gmail.com" <ku...@gmail.com>.
Sanjeev,

Can you tell me more details about your hive version/hadoop version etc.

On Wed, Aug 19, 2015 at 1:35 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> Can somebody gives me some pointer to looked upon?
>
> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> Hi
>> We are experiencing a strange problem with the hiveserver2, in one of the
>> job it gets the GC limit exceed from mapred task and hangs even having
>> enough heap available.we are not able to identify what causing this issue.
>> Could anybody help me identify the issue and let me know what pointers I
>> need to looked up.
>>
>> Thanks
>>
>
>


-- 
Swarnim

Re: hiveserver2 hangs

Posted by Noam Hasson <no...@kenshoo.com>.
We had a case of retrieving a record which is bigger than the GC limit, for
example a column with Array or Map type that has 1M cells.

On Wed, Aug 19, 2015 at 9:35 PM, Sanjeev Verma <sa...@gmail.com>
wrote:

> Can somebody gives me some pointer to looked upon?
>
> On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <sa...@gmail.com>
> wrote:
>
>> Hi
>> We are experiencing a strange problem with the hiveserver2, in one of the
>> job it gets the GC limit exceed from mapred task and hangs even having
>> enough heap available.we are not able to identify what causing this issue.
>> Could anybody help me identify the issue and let me know what pointers I
>> need to looked up.
>>
>> Thanks
>>
>
>

-- 
This e-mail, as well as any attached document, may contain material which 
is confidential and privileged and may include trademark, copyright and 
other intellectual property rights that are proprietary to Kenshoo Ltd, 
 its subsidiaries or affiliates ("Kenshoo"). This e-mail and its 
attachments may be read, copied and used only by the addressee for the 
purpose(s) for which it was disclosed herein. If you have received it in 
error, please destroy the message and any attachment, and contact us 
immediately. If you are not the intended recipient, be aware that any 
review, reliance, disclosure, copying, distribution or use of the contents 
of this message without Kenshoo's express permission is strictly prohibited.

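A back-of-envelope illustration of the huge-record point above (my numbers; the per-cell overheads are assumptions for a typical 64-bit JVM, not measurements): materializing a single Array or Map column with ~1M cells as Java objects costs far more than the raw payload.

```java
// Rough per-cell cost of materializing a 1M-cell collection column in
// HS2. Assumed overheads: ~16 B object header per cell object plus an
// 8 B reference from the containing array; real numbers vary by JVM.
public class RecordSizeModel {

    static long estimatedBytes(long cells, long payloadBytesPerCell) {
        long perCellOverhead = 16 + 8; // header + reference (assumed)
        return cells * (payloadBytesPerCell + perCellOverhead);
    }

    public static void main(String[] args) {
        // 1M cells of ~50 B payload each -> ~74 MB for one row's column,
        // before Thrift serialization buffers it all over again.
        System.out.println(estimatedBytes(1_000_000L, 50L));
    }
}
```

A handful of such rows in flight at once can dominate the heap, consistent with the GC-overhead-limit symptom reported in this thread.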
Re: hiveserver2 hangs

Posted by Sanjeev Verma <sa...@gmail.com>.
Can somebody give me some pointers to look at?

On Wed, Aug 19, 2015 at 9:26 AM, Sanjeev Verma <sa...@gmail.com>
wrote:

> Hi
> We are experiencing a strange problem with the hiveserver2, in one of the
> job it gets the GC limit exceed from mapred task and hangs even having
> enough heap available.we are not able to identify what causing this issue.
> Could anybody help me identify the issue and let me know what pointers I
> need to looked up.
>
> Thanks
>
