Posted to user@ignite.apache.org by Anil <an...@gmail.com> on 2017/06/10 09:36:39 UTC
High heap on ignite client
Hi,
I have implemented an export feature for Ignite data using a JDBC iterator:
ResultSet rs = statement.executeQuery();
while (rs.next()) {
    // do operations
}
The fetch size is 200.
When I run the export operation twice for 4L (0.4 million) records, the whole 6 GB heap fills up and is never released.
Initially I thought the operations transforming the result set to a file were causing the memory to fill up, but that is not the case.
I tried just the following, and the memory still grows and is not released:
while (rs.next()) {
    // nothing
}
 num     #instances      #bytes  class name
----------------------------------------------
   1:      55072353  2408335272  [C
   2:      54923606  1318166544  java.lang.String
   3:        779006   746187792  [B
   4:        903548   304746304  [Ljava.lang.Object;
   5:        773348   259844928  net.juniper.cs.entity.InstallBase
   6:       4745694   113896656  java.lang.Long
   7:       1111692    44467680  sun.nio.cs.UTF_8$Decoder
   8:        773348    30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
   9:        895627    21495048  java.util.ArrayList
  10:         12427    16517632  [I
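[Editor's note] Some rough arithmetic on the jmap histogram above makes it more concrete. This is a sketch; it assumes, as the rest of the thread does, that most of these Strings belong to the retained result-set rows, which the histogram alone cannot prove:

```java
public class HistogramMath {
    public static void main(String[] args) {
        // Figures copied from the jmap -histo output above.
        long stringBytes     = 1_318_166_544L;
        long stringInstances = 54_923_606L;
        long rowInstances    = 773_348L; // net.juniper.cs.entity.InstallBase

        // Shallow size of java.lang.String (header + fields, char[] excluded):
        // 24 bytes is the expected value on a 64-bit JVM with compressed oops,
        // so the histogram is internally consistent.
        System.out.println("bytes per String: " + stringBytes / stringInstances); // 24

        // Roughly 71 live Strings per entity instance: the rows and all their
        // field values are still reachable, i.e. retained, not collectible garbage.
        System.out.println("Strings per row:  " + stringInstances / rowInstances); // 71
    }
}
```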
I am not sure why the number of String objects keeps increasing.
Could you please help me understand the issue?
Thanks
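[Editor's note] For reference, a minimal streaming-export sketch. Names and the JDBC URL are illustrative, not the poster's actual code: the writer consumes rows one at a time, so with a driver that truly pages by fetch size the heap should stay flat. The JDBC wiring is shown only in comments because it needs a live cluster:

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

public class StreamingExport {
    /** Writes rows as CSV without ever holding more than one row on heap. */
    static void exportCsv(Iterator<String[]> rows, Appendable out) throws IOException {
        while (rows.hasNext()) {
            out.append(String.join(",", rows.next())).append('\n');
        }
    }

    // Hypothetical JDBC wiring (requires a running Ignite node, shown for shape only):
    //   try (Connection c = DriverManager.getConnection("jdbc:ignite://host/cache");
    //        Statement st = c.createStatement()) {
    //       st.setFetchSize(200); // the page size the driver should honour
    //       ResultSet rs = st.executeQuery("select * from InstallBase");
    //       // wrap rs in an Iterator<String[]> and call exportCsv(...)
    //   }

    public static void main(String[] args) throws IOException {
        StringBuilder out = new StringBuilder();
        exportCsv(List.of(new String[]{"a", "b"}, new String[]{"c", "d"}).iterator(), out);
        System.out.print(out); // a,b then c,d, one row per line
    }
}
```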
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
Thanks.
I have changed the swappiness to avoid sys time > user time and ran the test, but no luck.
What do you mean by "apps/containers running on the same physical machine"?
You mean on the Kubernetes instance? If yes, then yes, there are a number of
services/containers running on the same Kubernetes cluster/instance.
Does the Ignite client need high CPU?
Thanks,
Anil
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Hi,
I've taken a look at the logs.
I don't see huge heap consumption, but from the GC log for node1 I can see
that in a couple of GCs real time is greater than user plus sys time, and in
some cases sys time is higher than user time. Taking into account that you
are running Kubernetes, probably in a virtualized environment, I suspect
that overselling takes place here. Please check whether other
apps/containers are running on the same physical machine.
Kind regards,
Alex.
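[Editor's note] The symptom Alex describes, wall-clock (real) time far exceeding user + sys time in a GC entry, is a classic sign of a starved VM: swapping, CPU oversell, noisy neighbours. It can be spotted mechanically. A small sketch, assuming the usual HotSpot "[Times: user=... sys=..., real=... secs]" line format:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcTimesCheck {
    // Matches the trailing "[Times: user=0.10 sys=0.05, real=1.40 secs]" part.
    private static final Pattern TIMES = Pattern.compile(
        "user=([0-9.]+) sys=([0-9.]+), real=([0-9.]+)");

    /** True when wall-clock time far exceeds CPU time: a hint that the
     *  JVM was starved (swapping, CPU oversell, noisy neighbours). */
    static boolean looksStarved(String gcLogLine) {
        Matcher m = TIMES.matcher(gcLogLine);
        if (!m.find()) return false;
        double user = Double.parseDouble(m.group(1));
        double sys  = Double.parseDouble(m.group(2));
        double real = Double.parseDouble(m.group(3));
        return real > 2 * (user + sys); // threshold of 2x is an arbitrary choice
    }

    public static void main(String[] args) {
        System.out.println(looksStarved(
            "[Times: user=0.10 sys=0.05, real=1.40 secs]")); // true
    }
}
```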
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
Did you get a chance to look into it? Thanks.
Regards,
Anil
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Hi, I'll check and get back with any findings. Thanks.
Kind regards,
Alex.
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
The issue is not related to GC all the time. I have attached another GC log
and a server log; GC looks good.
I am running my vertx-hazelcast application (which establishes the Ignite
JDBC connection internally) inside Kubernetes.
Do you recommend any configuration for Kubernetes?
Thanks,
Anil
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
Your sample program is working. I have compared the configurations and
everything looks good.
The only problem I see is related to the amount of data the export pulls;
all other services are working fine without any issue.
I have attached the logs of the two export client nodes. Please let me know
if you see any issue. Thanks.
Thanks,
Anil
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Hi, please provide full logs for all the nodes.
Please add -Xms1g to the client's VM options.
Please try the code I attached before, with exactly the same server and
client config that was mentioned in the message. If you see that it works
in your environment, then compare your config to it step by step.
Kind regards,
Alex
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
The "Socket is closed" error is very frequent. May I know what causes the
following exception? Thanks.
Some more log -
2017-06-23 02:33:34 488 ERROR TcpDiscoverySpi:495 - Failed to send message:
TcpDiscoveryClientHeartbeatMessage [super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=a71a444dc51-9956f95a-3bf9-4777-9431-cda0df43ff7d,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]]
java.net.SocketException: Socket is closed
    at java.net.Socket.getSendBufferSize(Socket.java:1215)
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.socketStream(TcpDiscoverySpi.java:1254)
    at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.writeToSocket(TcpDiscoverySpi.java:1366)
    at org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1095)
    at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Thanks,
Anil
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
I tried -XX:G1NewSizePercent=30, and the Ignite client is getting restarted
very frequently, on each export operation.
-Xmx6144m -XX:MetaspaceSize=512m -XX:+UnlockExperimentalVMOptions
-XX:G1NewSizePercent=30 -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-Xloggc:C:/Anil/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
-XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/heapdump-client.hprof
I have attached the GC logs and application logs.
I am not sure what is causing the Ignite client to restart on every export.
Do you have any suggestions? Please advise.
Thanks,
Anil
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Thanks Alex. I will test it in my local and share the results.
Did you get a chance to look at the Jdbc driver's next() issue ? Thanks.
Thanks,
Anil
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Hi Anil,
I have not been able to reproduce your case based on the code and config
you provided.
If you provide the corresponding JFR recordings, I will check them for any
problems.
Please find the code attached. You can run it yourself and monitor the
client's GC activity via, for example, VisualVM.
I tried running a server node (ServerRunner) with the following VM settings:
-Xms1024m -Xmx3072m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
-XX:MaxGCPauseMillis=500
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC
-XX:+AlwaysPreTouch -XX:+PrintFlagsFinal
The client node was run with:
-Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
-XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCCause
-XX:+PrintGCDetails
-XX:+PrintAdaptiveSizePolicy -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+HeapDumpAfterFullGC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
-XX:+PrintFlagsFinal
Kind regards,
Alex.
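[Editor's note] The open question in this thread is whether the JDBC driver really fetches pages of fetchSize rows lazily or materializes the whole result up front. A toy paged iterator (pure Java, no Ignite involved; all names are illustrative) shows the lazy behaviour the poster expects; in a well-behaved driver the equivalent of the pagesFetched counter grows gradually as the client consumes rows:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntFunction;

public class PagedIterator implements Iterator<Integer> {
    private final IntFunction<List<Integer>> fetchPage; // pageIndex -> rows
    private final int pageSize;
    private List<Integer> page = List.of();
    private int pos, pageIdx;
    int pagesFetched; // how many round trips to the "server" were made

    PagedIterator(IntFunction<List<Integer>> fetchPage, int pageSize) {
        this.fetchPage = fetchPage;
        this.pageSize = pageSize;
    }

    @Override public boolean hasNext() {
        if (pos < page.size()) return true;
        // A short page means the previous fetch already hit the end.
        if (page.size() < pageSize && pagesFetched > 0) return false;
        page = fetchPage.apply(pageIdx++);
        pagesFetched++;
        pos = 0;
        return !page.isEmpty();
    }

    @Override public Integer next() { return page.get(pos++); }

    public static void main(String[] args) {
        int total = 1000, fetchSize = 200;
        // Simulated server: hands out rows page by page instead of all at once.
        IntFunction<List<Integer>> server = p -> {
            List<Integer> rows = new ArrayList<>();
            for (int i = p * fetchSize; i < Math.min(total, (p + 1) * fetchSize); i++)
                rows.add(i);
            return rows;
        };
        PagedIterator it = new PagedIterator(server, fetchSize);
        int count = 0;
        while (it.hasNext()) { it.next(); count++; }
        System.out.println(count + " rows in " + it.pagesFetched + " page fetches");
        // 1000 rows in 6 page fetches (5 full pages plus 1 empty terminator)
    }
}
```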
On Tue, Jun 20, 2017 at 8:00 AM, Anil [via Apache Ignite Users] <
ml+s70518n13980h14@n6.nabble.com> wrote:
> Hi Alex,
>
> Thanks for the suggestion Alex. I will try with new setting. thanks.
>
> I have attached the gc client file.
>
> Did you find anything regarding the JDBC issue? I have put a breakpoint in
> GridReduceQueryExecutor at resIter = res.iterator(); and the res object is
> holding all the records.
>
> Thanks
>
> On 19 June 2017 at 18:50, Alexander Fedotov <[hidden email]> wrote:
>
>> I don't see anything wrong with your config.
>> Could you please provide C:/Anil/dumps/gc-client.log?
>> There should be a reason for objects not being collected during GC.
>>
>> Just one more thing: try replacing -XX:NewSize=512m with
>> -XX:G1NewSizePercent=30. -XX:NewSize won't let G1GC adjust the young
>> gen size properly.
>>
>>
>>
>> Kind regards,
>> Alex.
>>
>> On Mon, Jun 19, 2017 at 3:47 PM, afedotov <[hidden email]> wrote:
>>
>>> Actually, JDBC driver should extract data page by page.
>>> Need to take an in-depth look.
>>>
>>> Kind regards,
>>> Alex.
>>>
>>> On Mon, Jun 19, 2017 at 3:08 PM, Anil [via Apache Ignite Users] <[hidden email]> wrote:
>>>
>>>> Hi Alex,
>>>>
>>>> I have attached the ignite client xml. 4L means 0.4 million records.
>>>> Sorry, I didn't generate JFR. But created heap dump.
>>>>
>>>> Do you agree that the JDBC driver is loading everything into memory,
>>>> and next() is just doing conversion?
>>>>
>>>> Thanks
>>>>
>>>> On 19 June 2017 at 17:16, Alexander Fedotov <[hidden email]> wrote:
>>>>
>>>>> Hi Anil.
>>>>>
>>>>> Could you please also share C:/Anil/ignite-client.xml? Also, it would
>>>>> be useful if you took JFR reports for the case with allocation
>>>>> profiling enabled.
>>>>> Just to clarify, by 4L do you mean 4 million entries?
>>>>>
>>>>> Kind regards,
>>>>> Alex.
>>>>>
>>>>> On Mon, Jun 19, 2017 at 10:15 AM, Alexander Fedotov <[hidden email]> wrote:
>>>>>
>>>>>> Thanks. I'll take a look and let you know about any findings.
>>>>>>
>>>>>> Kind regards,
>>>>>> Alex
>>>>>>
>>>>>> On 18 June 2017 at 3:33 PM, "Anil" <[hidden email]> wrote:
>>>>>>
>>>>>> Hi Alex,
>>>>>>
>>>>>> test program repository - https://github.com/adasari/test-ignite-jdbc.git
>>>>>>
>>>>>> please let us know if you have any suggestions/questions. thanks.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 15 June 2017 at 10:58, Anil <[hidden email]> wrote:
>>>>>>
>>>>>>> Sure. thanks
>>>>>>>
>>>>>>> On 14 June 2017 at 19:51, afedotov <[hidden email]> wrote:
>>>>>>>
>>>>>>>> Hi, Anil.
>>>>>>>>
>>>>>>>> Could you please share your full code (class/method) you are using
>>>>>>>> to read data.
>>>>>>>>
>>>>>>>> Kind regards,
>>>>>>>> Alex
>>>>>>>>
>>>>>>>> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden email]> wrote:
>>>>>>>>
>>>>>>>>> Do you have any advice on implementing a large-record export from
>>>>>>>>> Ignite?
>>>>>>>>>
>>>>>>>>> I could not really use ScanQuery, as my whole application is built
>>>>>>>>> around the JDBC driver and writing complex queries as scan queries is
>>>>>>>>> very difficult.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>>>>>>>>
>>>>>>>>>> I understand from the code that there is no cursor from the H2 db (or
>>>>>>>>>> the H2 db embedded in Ignite) internally, and all mapper responses are
>>>>>>>>>> consolidated at the reducer. It means that when exporting a large
>>>>>>>>>> number of records, all the data is in memory.
>>>>>>>>>>
>>>>>>>>>> if (send(nodes,
>>>>>>>>>>     oldStyle ?
>>>>>>>>>>         new GridQueryRequest(qryReqId,
>>>>>>>>>>             r.pageSize,
>>>>>>>>>>             space,
>>>>>>>>>>             mapQrys,
>>>>>>>>>>             topVer,
>>>>>>>>>>             extraSpaces(space, qry.spaces()),
>>>>>>>>>>             null,
>>>>>>>>>>             timeoutMillis) :
>>>>>>>>>>         new GridH2QueryRequest()
>>>>>>>>>>             .requestId(qryReqId)
>>>>>>>>>>             .topologyVersion(topVer)
>>>>>>>>>>             .pageSize(r.pageSize)
>>>>>>>>>>             .caches(qry.caches())
>>>>>>>>>>             .tables(distributedJoins ? qry.tables() : null)
>>>>>>>>>>             .partitions(convert(partsMap))
>>>>>>>>>>             .queries(mapQrys)
>>>>>>>>>>             .flags(flags)
>>>>>>>>>>             .timeout(timeoutMillis),
>>>>>>>>>>     oldStyle && partsMap != null ? new ExplicitPartitionsSpecializer(partsMap) : null,
>>>>>>>>>>     false)) {
>>>>>>>>>>
>>>>>>>>>>     awaitAllReplies(r, nodes, cancel);
>>>>>>>>>>
>>>>>>>>>>     *// once the responses from all nodes for the query are received..
>>>>>>>>>>     proceed further?*
>>>>>>>>>>
>>>>>>>>>>     if (!retry) {
>>>>>>>>>>         if (skipMergeTbl) {
>>>>>>>>>>             List<List<?>> res = new ArrayList<>();
>>>>>>>>>>
>>>>>>>>>>             // Simple UNION ALL can have multiple indexes.
>>>>>>>>>>             for (GridMergeIndex idx : r.idxs) {
>>>>>>>>>>                 Cursor cur = idx.findInStream(null, null);
>>>>>>>>>>
>>>>>>>>>>                 while (cur.next()) {
>>>>>>>>>>                     Row row = cur.get();
>>>>>>>>>>
>>>>>>>>>>                     int cols = row.getColumnCount();
>>>>>>>>>>
>>>>>>>>>>                     List<Object> resRow = new ArrayList<>(cols);
>>>>>>>>>>
>>>>>>>>>>                     for (int c = 0; c < cols; c++)
>>>>>>>>>>                         resRow.add(row.getValue(c).getObject());
>>>>>>>>>>
>>>>>>>>>>                     res.add(resRow);
>>>>>>>>>>                 }
>>>>>>>>>>             }
>>>>>>>>>>
>>>>>>>>>>             resIter = res.iterator();
>>>>>>>>>>         } else {
>>>>>>>>>>             // in case of a split query scenario
>>>>>>>>>>         }
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>> return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> The query cursor is an iterator which does column value mapping per
>>>>>>>>>> page. But all records of the query are still in memory, correct?
>>>>>>>>>>
>>>>>>>>>> Please correct me if I am wrong. thanks.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Thanks
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 10 June 2017 at 15:53, Anil <[hidden email]> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> jvm parameters used -
>>>>>>>>>>>
>>>>>>>>>>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>>>>>>>>>>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>>>>>>>>>>> -Xloggc:C:/Anil/dumps/gc-client.log
>>>>>>>>>>> -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCCause
>>>>>>>>>>> -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy -XX:+PrintGCTimeStamps
>>>>>>>>>>> -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC -XX:+ScavengeBeforeFullGC
>>>>>>>>>>> -XX:+DisableExplicitGC -XX:+AlwaysPreTouch -XX:+PrintFlagsFinal
>>>>>>>>>>> -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>>>>>>>>>>
>>>>>>>>>>> Thanks.
>>>>>>>>>>>
>>>>>>>>>>> On 10 June 2017 at 15:06, Anil <[hidden email]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I have implemented an export feature for Ignite data using JDBC
>>>>>>>>>>>> iteration:
>>>>>>>>>>>>
>>>>>>>>>>>> ResultSet rs = statement.executeQuery();
>>>>>>>>>>>>
>>>>>>>>>>>> while (rs.next()) {
>>>>>>>>>>>>     // do operations
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>> and the fetch size is 200.
>>>>>>>>>>>>
>>>>>>>>>>>> When I run the export operation twice for 4L (0.4 million) records,
>>>>>>>>>>>> the whole 6 GB heap fills up and is never released.
>>>>>>>>>>>>
>>>>>>>>>>>> Initially I thought that the operations transforming the result set
>>>>>>>>>>>> to a file were causing the memory to fill up. But no.
>>>>>>>>>>>>
>>>>>>>>>>>> I just did the following and the memory still grows and is not
>>>>>>>>>>>> released:
>>>>>>>>>>>>
>>>>>>>>>>>> while (rs.next()) {
>>>>>>>>>>>>     // nothing
>>>>>>>>>>>> }
>>>>>>>>>>>>
>>>>>>>>>>>>  num     #instances         #bytes  class name
>>>>>>>>>>>> ----------------------------------------------
>>>>>>>>>>>>    1:      55072353     2408335272  [C
>>>>>>>>>>>>    2:      54923606     1318166544  java.lang.String
>>>>>>>>>>>>    3:        779006      746187792  [B
>>>>>>>>>>>>    4:        903548      304746304  [Ljava.lang.Object;
>>>>>>>>>>>>    5:        773348      259844928  net.juniper.cs.entity.InstallBase
>>>>>>>>>>>>    6:       4745694      113896656  java.lang.Long
>>>>>>>>>>>>    7:       1111692       44467680  sun.nio.cs.UTF_8$Decoder
>>>>>>>>>>>>    8:        773348       30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
>>>>>>>>>>>>    9:        895627       21495048  java.util.ArrayList
>>>>>>>>>>>>   10:         12427       16517632  [I
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Not sure why the number of String objects keeps increasing.
>>>>>>>>>>>>
>>>>>>>>>>>> Could you please help me understand the issue?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> ------------------------------
>>>>>>>>> If you reply to this email, your message will be added to the
>>>>>>>>> discussion below:
>>>>>>>>> http://apache-ignite-users.70518.x6.nabble.com/High-heap-on-ignite-client-tp13594p13626.html
>>>>>>>>> To start a new topic under Apache Ignite Users, email [hidden
>>>>>>>>> email] <http:///user/SendEmail.jtp?type=node&node=13706&i=1>
>>>>>>>>> To unsubscribe from Apache Ignite Users, click here.
>>>>>>>>> NAML
>>>>>>>>> <http://apache-ignite-users.70518.x6.nabble.com/template/NamlServlet.jtp?macro=macro_viewer&id=instant_html%21nabble%3Aemail.naml&base=nabble.naml.namespaces.BasicNamespace-nabble.view.web.template.NabbleNamespace-nabble.view.web.template.NodeNamespace&breadcrumbs=notify_subscribers%21nabble%3Aemail.naml-instant_emails%21nabble%3Aemail.naml-send_instant_email%21nabble%3Aemail.naml>
>>>>>>>>>
>>>>>>>>
>>>>>>>> ------------------------------
>>>>>>>> View this message in context: Re: High heap on ignite client
>>>>>>>> <http://apache-ignite-users.70518.x6.nabble.com/High-heap-on-ignite-client-tp13594p13706.html>
>>>>>>>> Sent from the Apache Ignite Users mailing list archive
>>>>>>>> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>> *ignite-client.xml* (2K) Download Attachment
>>>> <http://apache-ignite-users.70518.x6.nabble.com/attachment/13953/0/ignite-client.xml>
>>>>
>>>>
>>>
>>>
>>
>>
>
> *gc-client-old.log* (1M) Download Attachment
> <http://apache-ignite-users.70518.x6.nabble.com/attachment/13980/0/gc-client-old.log>
>
>
test-ignite-jdbc-reproducer.tar.gz (60K) <http://apache-ignite-users.70518.x6.nabble.com/attachment/13990/0/test-ignite-jdbc-reproducer.tar.gz>
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
Thanks for the suggestion Alex. I will try with new setting. thanks.
I have attached the gc client file.
Did you find anything regarding the JDBC issue? I have put a breakpoint in
GridReduceQueryExecutor at resIter = res.iterator(); the res object was holding
all the records.
Thanks
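The accumulation described here can be reproduced without Ignite. The sketch below (plain Java with hypothetical names, not actual Ignite code) contrasts the eager pattern from the skipMergeTbl branch quoted earlier in the thread, which copies every row into a list before handing out an iterator, with a lazy iterator that converts rows on demand:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntFunction;

// Hypothetical demo: why returning res.iterator() over a fully copied list
// keeps every row on the client heap, while a lazy iterator does not.
public class ReducerSketch {
    // Eager: mirrors the quoted code - materializes all rows up front,
    // so heap usage grows linearly with the result size.
    static List<Object[]> materialize(int rowCnt, IntFunction<Object[]> fetchRow) {
        List<Object[]> res = new ArrayList<>(rowCnt);

        for (int i = 0; i < rowCnt; i++)
            res.add(fetchRow.apply(i)); // every row stays reachable via 'res'

        return res;
    }

    // Lazy: only the row currently being consumed is reachable.
    static Iterator<Object[]> lazy(int rowCnt, IntFunction<Object[]> fetchRow) {
        return new Iterator<Object[]>() {
            private int i;

            @Override public boolean hasNext() { return i < rowCnt; }

            @Override public Object[] next() { return fetchRow.apply(i++); }
        };
    }
}
```

With 0.4 million rows of mostly-String columns, the eager variant keeps every converted row reachable until the cursor itself is dropped, which is consistent with the [C and java.lang.String dominance in the histogram.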
Re: High heap on ignite client
Posted by Alexander Fedotov <al...@gmail.com>.
I don't see anything wrong with your config.
Could you please provide C:/Anil/dumps/gc-client.log?
There should be a reason for objects not being collected during GC.
Just one more thing: try replacing -XX:NewSize=512m with
-XX:G1NewSizePercent=30.
-XX:NewSize won't let G1GC adjust the young generation size properly.
Kind regards,
Alex.
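For reference, applying that change to the flag set quoted earlier in the thread would give something like the following (paths and sizes are Anil's original values, kept only for illustration, not recommendations; note the original command line also listed -XX:+ScavengeBeforeFullGC and -XX:+DisableExplicitGC twice, which is redundant):

```
-Xmx6144m -XX:G1NewSizePercent=30 -XX:+UseTLAB -XX:+UseG1GC
-XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+AlwaysPreTouch -XX:+PrintFlagsFinal
-XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
```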
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Actually, the JDBC driver should extract data page by page.
I need to take an in-depth look.
Kind regards,
Alex.
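The page-by-page pattern Alex refers to can be sketched in isolation. This is only an illustrative sketch, not Ignite's driver code: the Iterator below stands in for a ResultSet consumed with a small fetch size, and each row is written out as soon as it is read, so the heap only ever holds about one page of results rather than the full result set.

```java
import java.io.IOException;
import java.io.StringWriter;
import java.io.Writer;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class StreamingExport {
    // Writes each row as a CSV line the moment it is produced. Nothing is
    // accumulated, so heap usage stays bounded by one page of results no
    // matter how large the overall result set is.
    static long export(Iterator<List<Object>> rows, Writer out) throws IOException {
        long written = 0;
        while (rows.hasNext()) {
            List<Object> row = rows.next();
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < row.size(); i++) {
                if (i > 0) line.append(',');
                line.append(row.get(i));
            }
            out.write(line.append('\n').toString());
            written++; // 'row' becomes unreachable after this iteration
        }
        return written;
    }

    public static void main(String[] args) throws IOException {
        // In-memory stand-in for a paged JDBC ResultSet.
        List<List<Object>> data = Arrays.asList(
            Arrays.<Object>asList(1L, "alpha"),
            Arrays.<Object>asList(2L, "beta"));
        StringWriter out = new StringWriter();
        long n = export(data.iterator(), out);
        System.out.println(n + " rows exported");
        System.out.print(out);
    }
}
```

If the exporting side is written this way and the heap still fills, the buffering must be happening below the ResultSet, i.e. in the driver or the reducer.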
On Mon, Jun 19, 2017 at 3:08 PM, Anil [via Apache Ignite Users] <
ml+s70518n13953h77@n6.nabble.com> wrote:
> HI Alex,
>
> I have attached the Ignite client XML. 4L means 0.4 million records.
> Sorry, I didn't generate a JFR recording, but I created a heap dump.
>
> Do you agree that the JDBC driver is loading everything into memory and
> that next() is just doing conversion?
>
> Thanks
>
> On 19 June 2017 at 17:16, Alexander Fedotov <[hidden email]> wrote:
>
>> Hi Anil.
>>
>> Could you please also share C:/Anil/ignite-client.xml ? As well, it
>> would be useful if you took JFR reports for the case with allocation
>> profiling enabled.
>> Just to clarify, by 4L do you mean 4 million entries?
>>
>> Kind regards,
>> Alex.
>>
>> On Mon, Jun 19, 2017 at 10:15 AM, Alexander Fedotov <[hidden email]> wrote:
>>
>>> Thanks. I'll take a look and let you know about any findings.
>>>
>>> Kind regards,
>>> Alex
>>>
>>> On 18 June 2017 at 3:33 PM, "Anil" <[hidden email]> wrote:
>>>
>>> Hi Alex,
>>>
>>> test program repository - https://github.com/adasari/test-ignite-jdbc.git
>>>
>>> please let us know if you have any suggestions/questions. thanks.
>>>
>>> Thanks
>>>
>>> On 15 June 2017 at 10:58, Anil <[hidden email]> wrote:
>>>
>>>> Sure. thanks
>>>>
>>>> On 14 June 2017 at 19:51, afedotov <[hidden email]> wrote:
>>>>
>>>>> Hi, Anil.
>>>>>
>>>>> Could you please share your full code (class/method) you are using to
>>>>> read data.
>>>>>
>>>>> Kind regards,
>>>>> Alex
>>>>>
>>>>> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden email]> wrote:
>>>>>
>>>>>> Do you have any advice on implementing a large-record export from
>>>>>> Ignite?
>>>>>>
>>>>>> I could not use ScanQuery, as my whole application is built around the
>>>>>> JDBC driver and writing complex queries as scan queries is very difficult.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>>>>>
>>>>>>> I understand from the code that there is no cursor from the H2 database
>>>>>>> (or Ignite's embedded H2) internally, and all mapper responses are
>>>>>>> consolidated at the reducer. It means that when exporting a large
>>>>>>> number of records, all the data is in memory.
>>>>>>>
>>>>>>> if (send(nodes,
>>>>>>> oldStyle ?
>>>>>>> new GridQueryRequest(qryReqId,
>>>>>>> r.pageSize,
>>>>>>> space,
>>>>>>> mapQrys,
>>>>>>> topVer,
>>>>>>> extraSpaces(space, qry.spaces()),
>>>>>>> null,
>>>>>>> timeoutMillis) :
>>>>>>> new GridH2QueryRequest()
>>>>>>> .requestId(qryReqId)
>>>>>>> .topologyVersion(topVer)
>>>>>>> .pageSize(r.pageSize)
>>>>>>> .caches(qry.caches())
>>>>>>> .tables(distributedJoins ? qry.tables()
>>>>>>> : null)
>>>>>>> .partitions(convert(partsMap))
>>>>>>> .queries(mapQrys)
>>>>>>> .flags(flags)
>>>>>>> .timeout(timeoutMillis),
>>>>>>> oldStyle && partsMap != null ? new
>>>>>>> ExplicitPartitionsSpecializer(partsMap) : null,
>>>>>>> false)) {
>>>>>>>
>>>>>>> awaitAllReplies(r, nodes, cancel);
>>>>>>>
>>>>>>> // once the responses from all nodes for the query are received,
>>>>>>> // proceed further ?
>>>>>>>
>>>>>>> if (!retry) {
>>>>>>> if (skipMergeTbl) {
>>>>>>> List<List<?>> res = new ArrayList<>();
>>>>>>>
>>>>>>> // Simple UNION ALL can have multiple
>>>>>>> indexes.
>>>>>>> for (GridMergeIndex idx : r.idxs) {
>>>>>>> Cursor cur = idx.findInStream(null,
>>>>>>> null);
>>>>>>>
>>>>>>> while (cur.next()) {
>>>>>>> Row row = cur.get();
>>>>>>>
>>>>>>> int cols = row.getColumnCount();
>>>>>>>
>>>>>>> List<Object> resRow = new
>>>>>>> ArrayList<>(cols);
>>>>>>>
>>>>>>>                             for (int c = 0; c < cols; c++)
>>>>>>>                                 resRow.add(row.getValue(c).getObject());
>>>>>>>
>>>>>>> res.add(resRow);
>>>>>>> }
>>>>>>> }
>>>>>>>
>>>>>>> resIter = res.iterator();
>>>>>>>                     } else {
>>>>>>>                         // in case of split query scenario
>>>>>>> }
>>>>>>>
>>>>>>> }
>>>>>>>
>>>>>>> return new GridQueryCacheObjectsIterator(resIter, cctx,
>>>>>>> keepPortable);
>>>>>>>
>>>>>>>
>>>>>>> The query cursor is an iterator that does column value mapping per
>>>>>>> page, but all records of the query are still in memory. Correct?
>>>>>>>
>>>>>>> Please correct me if I am wrong. Thanks.
>>>>>>>
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>> On 10 June 2017 at 15:53, Anil <[hidden email]
>>>>>>> <http:///user/SendEmail.jtp?type=node&node=13626&i=1>> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>> jvm parameters used -
>>>>>>>>
>>>>>>>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>>>>>>>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>>>>>>>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
>>>>>>>> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
>>>>>>>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
>>>>>>>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
>>>>>>>> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>>>>>>>
>>>>>>>> Thanks.
>>>>>>>>
>>>>>>>> On 10 June 2017 at 15:06, Anil <[hidden email]
>>>>>>>> <http:///user/SendEmail.jtp?type=node&node=13626&i=2>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>>
>>>>>>>>> I have implemented the export feature of Ignite data using a JDBC
>>>>>>>>> iterator:
>>>>>>>>>
>>>>>>>>> ResultSet rs = statement.executeQuery();
>>>>>>>>>
>>>>>>>>> while (rs.next()) {
>>>>>>>>>     // do operations
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> and the fetch size is 200.
>>>>>>>>>
>>>>>>>>> when I run the export operation twice for 4L (0.4 million) records,
>>>>>>>>> the whole 6 GB heap is filled up and never gets released.
>>>>>>>>>
>>>>>>>>> Initially I thought that the operations transforming the result set
>>>>>>>>> to a file were causing the memory to fill up. But no.
>>>>>>>>>
>>>>>>>>> I just did the following and the memory still grows and is not
>>>>>>>>> released:
>>>>>>>>>
>>>>>>>>> while (rs.next()) {
>>>>>>>>>     // nothing
>>>>>>>>> }
>>>>>>>>>
>>>>>>>>> num #instances #bytes class name
>>>>>>>>> ----------------------------------------------
>>>>>>>>> 1: 55072353 2408335272 [C
>>>>>>>>> 2: 54923606 1318166544 java.lang.String
>>>>>>>>> 3: 779006 746187792 [B
>>>>>>>>> 4: 903548 304746304 [Ljava.lang.Object;
>>>>>>>>> 5: 773348 259844928 net.juniper.cs.entity.InstallBase
>>>>>>>>> 6: 4745694 113896656 java.lang.Long
>>>>>>>>> 7: 1111692 44467680 sun.nio.cs.UTF_8$Decoder
>>>>>>>>> 8: 773348 30933920 org.apache.ignite.internal.binary.BinaryObjectImpl
>>>>>>>>> 9: 895627 21495048 java.util.ArrayList
>>>>>>>>> 10: 12427 16517632 [I
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Not sure why the number of String objects keeps increasing.
>>>>>>>>>
>>>>>>>>> Could you please help me understand the issue?
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
HI Alex,
I have attached the Ignite client XML. 4L means 0.4 million records. Sorry,
I didn't generate a JFR recording, but I created a heap dump.
Do you agree that the JDBC driver is loading everything into memory and that
next() is just doing conversion?
Thanks
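The suspicion here, that the driver buffers the whole result and next() only converts rows that are already on the heap, comes down to eager versus lazy iteration. A self-contained contrast of the two shapes (illustrative only, not Ignite code): the eager variant mirrors the `List<List<?>> res` buffering in the reducer snippet quoted in this thread, where peak heap grows with the result size, while the lazy variant materializes one row at a time.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class EagerVsLazy {
    // Eager: every row is copied into one big list before iteration begins,
    // so the heap must hold the entire result set at once.
    static Iterator<long[]> eager(int totalRows) {
        List<long[]> res = new ArrayList<>();
        for (int i = 0; i < totalRows; i++)
            res.add(new long[] { i });
        return res.iterator();
    }

    // Lazy: rows are produced on demand; only the current row is live.
    static Iterator<long[]> lazy(final int totalRows) {
        return new Iterator<long[]>() {
            private int next = 0;
            @Override public boolean hasNext() { return next < totalRows; }
            @Override public long[] next() { return new long[] { next++ }; }
        };
    }

    // Consumes an iterator the way rs.next() is consumed in the export loop.
    static long drain(Iterator<long[]> it) {
        long sum = 0;
        while (it.hasNext())
            sum += it.next()[0];
        return sum;
    }

    public static void main(String[] args) {
        // Both produce identical rows; they differ only in peak heap retention.
        System.out.println(drain(eager(1000))); // prints 499500
        System.out.println(drain(lazy(1000)));  // prints 499500
    }
}
```

A heap histogram taken mid-drain would show roughly totalRows live long[] instances for the eager variant but only a handful for the lazy one, which is the same kind of signature the jmap histogram in this thread shows for String and InstallBase.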
Re: High heap on ignite client
Posted by Alexander Fedotov <al...@gmail.com>.
Hi Anil.
Could you please also share C:/Anil/ignite-client.xml ? As well, it would
be useful if you took JFR reports for the case with allocation profiling
enabled.
Just to clarify, by 4L do you mean 4 million entries?
Kind regards,
Alex.
Re: High heap on ignite client
Posted by Alexander Fedotov <al...@gmail.com>.
Thanks. I'll take a look and let you know about any findings.
Kind regards,
Alex
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Hi Alex,
test program repository - https://github.com/adasari/test-ignite-jdbc.git
please let us know if you have any suggestions/questions. thanks.
Thanks
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Sure. thanks
Re: High heap on ignite client
Posted by afedotov <al...@gmail.com>.
Hi, Anil.
Could you please share the full code (class/method) you are using to read
the data.
Kind regards,
Alex
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
Do you have any advice on implementing export of large numbers of records from
Ignite?
I could not use ScanQuery, as my whole application is built around the JDBC
driver, and writing complex queries as scan queries is very difficult.
Thanks
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
I understand from the code that there is no cursor from the H2 database (or
Ignite's embedded H2 database) internally, and all the mapper responses are
consolidated at the reducer. That means that when exporting a large number of
records, all the data is in memory.
if (send(nodes,
    oldStyle ?
        new GridQueryRequest(qryReqId,
            r.pageSize,
            space,
            mapQrys,
            topVer,
            extraSpaces(space, qry.spaces()),
            null,
            timeoutMillis) :
        new GridH2QueryRequest()
            .requestId(qryReqId)
            .topologyVersion(topVer)
            .pageSize(r.pageSize)
            .caches(qry.caches())
            .tables(distributedJoins ? qry.tables() : null)
            .partitions(convert(partsMap))
            .queries(mapQrys)
            .flags(flags)
            .timeout(timeoutMillis),
    oldStyle && partsMap != null ? new ExplicitPartitionsSpecializer(partsMap) : null,
    false)) {

    awaitAllReplies(r, nodes, cancel);

    // once the responses from all nodes for the query are received.. proceed further?

    if (!retry) {
        if (skipMergeTbl) {
            List<List<?>> res = new ArrayList<>();

            // Simple UNION ALL can have multiple indexes.
            for (GridMergeIndex idx : r.idxs) {
                Cursor cur = idx.findInStream(null, null);

                while (cur.next()) {
                    Row row = cur.get();

                    int cols = row.getColumnCount();

                    List<Object> resRow = new ArrayList<>(cols);

                    for (int c = 0; c < cols; c++)
                        resRow.add(row.getValue(c).getObject());

                    res.add(resRow);
                }
            }

            resIter = res.iterator();
        } else {
            // in case of a split query scenario
        }
    }

return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);
The query cursor is an iterator which does column-value mapping per page, but
all the records of the query are still in memory. Correct?
Please correct me if I am wrong. Thanks.
Thanks
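[Editor's note] The reducer snippet quoted above builds the complete `res` list before handing back an iterator, which matches the observed heap growth: the result is materialized eagerly, regardless of the JDBC fetch size. A minimal, pure-Java sketch of the difference between that eager pattern and a lazy cursor (the `rowFn` row producer is hypothetical, not Ignite API):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.IntFunction;

public class EagerVsLazy {
    // Eager: all n rows are built up front, as in the reducer snippet —
    // the whole result set is resident on the heap at once.
    static List<String> eager(int n, IntFunction<String> rowFn) {
        List<String> res = new ArrayList<>(n);
        for (int i = 0; i < n; i++)
            res.add(rowFn.apply(i));
        return res;
    }

    // Lazy: rows are produced one at a time as the consumer iterates,
    // so only the current row needs to be live.
    static Iterator<String> lazy(int n, IntFunction<String> rowFn) {
        return new Iterator<String>() {
            int i = 0;
            @Override public boolean hasNext() { return i < n; }
            @Override public String next() { return rowFn.apply(i++); }
        };
    }

    public static void main(String[] args) {
        Iterator<String> it = lazy(3, i -> "row-" + i);
        StringBuilder sb = new StringBuilder();
        while (it.hasNext()) sb.append(it.next()).append(' ');
        System.out.println(sb.toString().trim()); // prints: row-0 row-1 row-2
    }
}
```

With the eager shape, a small fetch size only controls how rows travel over the wire, not how many are retained at the reducer.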
Re: High heap on ignite client
Posted by Anil <an...@gmail.com>.
jvm parameters used -
-Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
-XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
-XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
Thanks.
On 10 June 2017 at 15:06, Anil <an...@gmail.com> wrote:
> Hi,
>
> I have implemented an export feature for Ignite data using a JDBC iterator:
>
> ResultSet rs = statement.executeQuery();
>
> while (rs.next()) {
>     // do operations
> }
>
> and the fetch size is 200.
>
> When I run the export operation twice for 4 lakh (400K) records, the whole
> 6 GB heap fills up and is never released.
>
> Initially I thought that the operations transforming the result set to a file
> were filling up the memory, but that was not the case.
>
> I just did the following and the memory still grows and is not
> released:
>
> while (rs.next()) {
>     // nothing
> }
>
> num    #instances    #bytes        class name
> ----------------------------------------------
> 1:     55072353      2408335272    [C
> 2:     54923606      1318166544    java.lang.String
> 3:     779006        746187792     [B
> 4:     903548        304746304     [Ljava.lang.Object;
> 5:     773348        259844928     net.juniper.cs.entity.InstallBase
> 6:     4745694       113896656     java.lang.Long
> 7:     1111692       44467680      sun.nio.cs.UTF_8$Decoder
> 8:     773348        30933920      org.apache.ignite.internal.binary.BinaryObjectImpl
> 9:     895627        21495048      java.util.ArrayList
> 10:    12427         16517632      [I
>
> Not sure why the number of String objects keeps increasing.
>
> Could you please help me understand the issue?
>
> Thanks
>
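[Editor's note] The heap histogram quoted above can be sanity-checked with a little arithmetic (the instance and byte counts below are copied verbatim from the histogram):

```java
public class HistogramCheck {
    public static void main(String[] args) {
        // #instances and #bytes for the top two entries of the jmap histogram.
        long charArrBytes = 2_408_335_272L, charArrInst = 55_072_353L; // [C
        long strBytes     = 1_318_166_544L, strInst     = 54_923_606L; // java.lang.String

        // Average shallow size per instance.
        System.out.println("avg char[] bytes: " + charArrBytes / charArrInst); // ~43
        System.out.println("avg String bytes: " + strBytes / strInst);         // 24

        // Strings plus their backing char[] arrays alone account for roughly
        // 3.7 GB of the 6 GB heap, so the growth is dominated by retained
        // string data rather than Ignite's binary objects.
        double combinedGb = (charArrBytes + strBytes) / 1e9;
        System.out.println("combined GB: " + combinedGb);
    }
}
```

At ~43 bytes per char[], these are mostly short strings (field values), consistent with deserialized result-set rows being retained somewhere rather than a few large buffers.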