Posted to user@ignite.apache.org by Vladimir Ozerov <vo...@gridgain.com> on 2016/03/15 08:53:15 UTC

Re: about mr accelerator question.

Hi,

1) If you have 200G of data, and all of this data is used, 6 nodes with 24G
each will not be able to hold it. Possible solutions:
- Allocate more memory. Note that having 24G on-heap and 24G off-heap
doesn't mean you have 48G for IGFS. IGFS stores data either on-heap or
off-heap depending on the data cache configuration, but not in both.
- If you cannot allocate more memory, you can configure
IgfsPerBlockLruEvictionPolicy, which will evict some blocks from memory when
memory consumption is too high and pull them from the secondary file system
again when needed. This might affect performance, but will prevent
out-of-memory errors.
- Also please note that the Hadoop Accelerator requires more
permgen/metaspace than a normal application.
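A minimal sketch of what the eviction setup can look like in the node's Spring XML. The bean and property names follow the Ignite 1.x API used elsewhere in this thread; maxSize counts bytes of IGFS data kept in memory and the 1G value is only an example:

```xml
<!-- IGFS data cache with per-block LRU eviction (sketch; sizes are examples). -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="igfs-data"/>
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
            <!-- Start evicting blocks once cached IGFS data reaches ~1G. -->
            <property name="maxSize" value="#{1024L * 1024L * 1024L}"/>
        </bean>
    </property>
</bean>
```

Evicted blocks are re-read from the secondary file system on the next access, trading some read performance for bounded memory use.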

We can dig further if you provide the following information:
- The XML configuration you use to start the node.
- The exact reason for the out-of-memory error.

2) About HDFS startup - when running HDFS as a secondary file system, it is
better not to change the default file system in the main core-site.xml, because
Hadoop expects it to be of "hdfs" type. Instead, you can specify only the IGFS
classes in this file and access IGFS using fully-qualified paths, e.g.
"igfs:///path/to/file" instead of "/path/to/file".
Alternatively, you can create a separate configuration file with the default
file system set to IGFS and then specify it when starting Hadoop. E.g.:
hadoop --config [folder_with_your_config] ...
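For example, the separate config folder could hold a core-site.xml along these lines. The IGFS implementation class names follow the Ignite Hadoop Accelerator documentation; the "igfs" authority in the URI is assumed to match the file system name configured on the node:

```xml
<!-- core-site.xml inside the separate config folder (sketch). -->
<configuration>
  <!-- Make IGFS the default file system for jobs started with this config. -->
  <property>
    <name>fs.default.name</name>
    <value>igfs://igfs@/</value>
  </property>
  <!-- IGFS file system implementation classes. -->
  <property>
    <name>fs.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.igfs.impl</name>
    <value>org.apache.ignite.hadoop.fs.v2.IgniteHadoopFileSystem</value>
  </property>
</configuration>
```

The main core-site.xml keeps its "hdfs" default, so regular Hadoop tooling is unaffected.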



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3509.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Re: about mr accelerator question.

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Currently the client part of the Hadoop Accelerator connects to a single Ignite
node. If you kill this node, the client will no longer be able to track job
progress and will not be notified about job completion. From the attached logs
it seems to me that you killed the node the client had been connected to. Is
this the case?
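For reference, the client side is typically pointed at one specific node through mapred-site.xml. The property names below follow the Ignite Hadoop Accelerator documentation; the host and port are placeholders taken from this thread's setup:

```xml
<!-- mapred-site.xml on the Hadoop client (sketch; host/port are examples). -->
<configuration>
  <!-- Run map-reduce jobs through Ignite instead of classic MR / YARN. -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>ignite</value>
  </property>
  <!-- The single Ignite node the client connects to
       (ConnectorConfiguration port, 11211 by default). -->
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>202.99.96.170:11211</value>
  </property>
</configuration>
```

If the node named here dies mid-job, the client loses job tracking, which matches the behavior described above.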

BTW, we are planning to implement high-availability soon.

Vladimir.

On Wed, Mar 30, 2016 at 1:44 PM, liym@runstone.com <li...@runstone.com>
wrote:

> *Then I found some warnings in the log, so I changed default-config.xml:*
>
>
> [17:31:23,540][WARN ][grid-nio-worker-2-#102%null%][TcpCommunicationSpi] Communication SPI Session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=rslog5-tj/202.99.69.174:47100, writeTimeout=2000]
>
>
>  <property name="communicationSpi">
>
>     <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>       <!-- Override local port. -->
>       <property name="socketWriteTimeout" value="60000"/>
>     </bean>
>  </property>
>
> *But another error appeared:*
>
> Mar 30, 2016 6:14:40 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
> INFO: Client TCP connection closed: /202.99.69.174:11211
>
> Exception in thread "main" java.io.IOException: Job tracker doesn't have any information about the job: job_05559fd1-37aa-4a52-aa38-02adf020972f_0001
>
> at org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:186)
> at org.apache.hadoop.mapreduce.Job$1.run(Job.java:325)
> at org.apache.hadoop.mapreduce.Job$1.run(Job.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
> at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:610)
> at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1355)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1317)
> at mapreduce.DomainsSecondPVByIPMR.main(DomainsSecondPVByIPMR.java:73)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> *From the log I found another warning:*
>
>
> 18:10:28,458][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-323-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
>
> *[18:10:29,470][WARN ][grid-nio-worker-0-#100%null%][TcpCommunicationSpi] Failed to process selector key (will close): GridSelectorNioSessionImpl [selectorIdx=0, queueSize=217, writeBuf=java.nio.DirectByteBuffer[pos=12496 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], recovery=GridNioRecoveryDescriptor [acked=64416, resendCnt=0, rcvCnt=64715, reserved=true, lastAck=64704, nodeLeft=false, node=TcpDiscoveryNode [id=1a33b0e1-1627-4908-aa98-86d7fe19a8c5, addrs=[127.0.0.1, 202.99.96.170], sockAddrs=[rslog1-tj/202.99.96.170:47500, /127.0.0.1:47500, /202.99.96.170:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1459331261883, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], connected=true, connectCnt=0, queueLimit=5120], super=GridNioSessionImpl [locAddr=/202.99.69.170:47100, rmtAddr=/202.99.96.170:37587, createTime=1459331262015, closeTime=0, bytesSent=3934874712, bytesRcvd=4478411704, sndSchedTime=1459332629400, lastSndTime=1459332621394, lastRcvTime=1459332620807, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser@55fa1f31, directMode=true], GridConnectionBytesVerifyFilter], accepted=true]]
> [18:10:29,503][WARN ][grid-nio-worker-0-#100%null%][TcpCommunicationSpi] Closing NIO session because of unhandled exception [cls=class o.a.i.i.util.nio.GridNioException, msg=Connection reset by peer]
> [18:10:29,541][WARN ][disco-event-worker-#113%null%][GridDiscoveryManager] Node FAILED: TcpDiscoveryNode [id=1a33b0e1-1627-4908-aa98-86d7fe19a8c5, addrs=[127.0.0.1, 202.99.96.170], sockAddrs=[rslog1-tj/202.99.96.170:47500, /127.0.0.1:47500, /202.99.96.170:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1459331261883, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]*
>
> [18:10:29,543][INFO ][disco-event-worker-#113%null%][GridDiscoveryManager] Topology snapshot [ver=8, servers=5, clients=0, CPUs=120, heap=160.0GB]
>
> [18:10:30,206][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-331-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:10:32,690][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-324-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:10:33,071][INFO ][exchange-worker-#115%null%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=8, minorTopVer=0], evt=NODE_FAILED, node=1a33b0e1-1627-4908-aa98-86d7fe19a8c5]
>
>
>
>
>
> *From:* liym@runstone.com
> *Date:* 2016-03-30 17:36
> *To:* user <us...@ignite.apache.org>
> *Subject:* Re: Re: about mr accelerator question.
> I am so sorry that the description was not clear.
> On the failing node, there is an exception:
>
>
> Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
> INFO: Client TCP connection closed: /202.99.69.178:11211
>
> *Exception in thread "main" Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close*
> INFO: Client TCP connection closed: /202.99.96.178:11211
>
> java.io.IOException: Failed to get job status: job_c1de7618-b0f1-4159-ade4-57e305d4667f_0001
>
> at org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:191)
> at org.apache.hadoop.mapreduce.Job$1.run(Job.java:325)
> at org.apache.hadoop.mapreduce.Job$1.run(Job.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
>
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
> at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:610)
> at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1356)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1317)
> at mapreduce.DomainsSecondPVByIPMR.main(DomainsSecondPVByIPMR.java:73)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
>
> Caused by: class org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException: Failed to perform request (connection failed): /202.99.96.178:11211
>
> at org.apache.ignite.internal.client.impl.connection.GridClientConnection.getCloseReasonAsException(GridClientConnection.java:491)
>
> at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:336)
>
> at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:296)
>
> at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onDisconnected(GridClientConnectionManagerAdapter.java:605)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionClosed(GridNioFilterChain.java:249)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
>
> at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionClosed(GridNioCodecFilter.java:70)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionClosed(GridNioServer.java:2115)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionClosed(GridNioFilterChain.java:147)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:1659)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:731)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeys(GridNioServer.java:1463)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1398)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1280)
>
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
>
>
> *From:* Vladimir Ozerov <vo...@gridgain.com>
> *Date:* 2016-03-29 19:53
> *To:* user <us...@ignite.apache.org>
> *Subject:* Re: Re: about mr accelerator question.
> Hi,
>
> Sorry, I still do not understand the question well. Do you need to
> understand why the node was killed? Or did something go wrong with the
> cluster after the node had been killed?
>
> Vladimir.
>
> On Tue, Mar 29, 2016 at 4:50 AM, liym@runstone.com <li...@runstone.com>
> wrote:
>
>> One node's process gets killed automatically when executing the MR task,
>> so the other nodes can no longer send messages to the killed node.
>>
>> [17:42:52] Security status [authentication=off, tls/ssl=off]
>> [17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
>> [17:42:55] Performance suggestions for grid  (fix if possible)
>> [17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>>
>> [17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
>> [17:42:55]
>>
>> [17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
>> [17:42:55]
>> [17:42:55] Ignite node started OK (id=7965370b)
>>
>> [17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
>>
>> [17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
>>
>> [17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
>>
>> [17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
>>
>> [17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
>>
>> [17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
>>
>> [17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> *./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"*
>> *hduser@rslog1-tj:~/ignite/bin$*
>>
>> ------------------------------
>>
>>
>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>> *Date:* 2016-03-28 18:57
>> *To:* user <us...@ignite.apache.org>
>> *Subject:* Re: Re: about mr accelerator question.
>> Hi,
>>
>> I'm not sure I understand which error you mean. At least, I do not see
>> any exceptions in the log. Could you please clarify?
>>
>> Vladimir.
>>
>> On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com>
>> wrote:
>>
>>> There is a question: I now have 6 Ignite nodes, and there is an error when
>>> the MR task is running. One node is usually killed; can you tell me why?
>>> Thanks a lot.
>>> With only one or two nodes, I don't see this error.
>>>
>>> [17:42:52] Security status [authentication=off, tls/ssl=off]
>>> [17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
>>> [17:42:55] Performance suggestions for grid  (fix if possible)
>>> [17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>>>
>>> [17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
>>> [17:42:55]
>>>
>>> [17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
>>> [17:42:55]
>>> [17:42:55] Ignite node started OK (id=7965370b)
>>>
>>> [17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
>>>
>>> [17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
>>>
>>> [17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
>>>
>>> [17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
>>>
>>> [17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
>>>
>>> [17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
>>>
>>> [17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> [17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> [17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> [17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>>
>>> ./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>> hduser@rslog1-tj:~/ignite/bin$
>>>
>>> *All nodes have the same config:*
>>> <?xml version="1.0" encoding="UTF-8"?>
>>>
>>> <!--
>>>   Licensed to the Apache Software Foundation (ASF) under one or more
>>>   contributor license agreements.  See the NOTICE file distributed with
>>>   this work for additional information regarding copyright ownership.
>>>   The ASF licenses this file to You under the Apache License, Version 2.0
>>>   (the "License"); you may not use this file except in compliance with
>>>   the License.  You may obtain a copy of the License at
>>>
>>>        http://www.apache.org/licenses/LICENSE-2.0
>>>
>>>   Unless required by applicable law or agreed to in writing, software
>>>   distributed under the License is distributed on an "AS IS" BASIS,
>>>
>>>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>   See the License for the specific language governing permissions and
>>>   limitations under the License.
>>> -->
>>>
>>> <!--
>>>     Ignite Spring configuration file.
>>>
>>>
>>>     When starting a standalone Ignite node, you need to execute the following command:
>>>
>>>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>>>
>>>
>>>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>>>     Ignition.start("path-to-this-file/default-config.xml");
>>> -->
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xmlns:util="http://www.springframework.org/schema/util"
>>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>        http://www.springframework.org/schema/beans/spring-beans.xsd
>>>        http://www.springframework.org/schema/util
>>>        http://www.springframework.org/schema/util/spring-util.xsd">
>>>
>>>     <!--
>>>         Optional description.
>>>     -->
>>>     <description>
>>>
>>>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>>>         Ignite node will start with this configuration by default.
>>>     </description>
>>>
>>>     <!--
>>>
>>>         Initialize property configurer so we can reference environment variables.
>>>     -->
>>>
>>>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>>
>>>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>>>         <property name="searchSystemEnvironment" value="true"/>
>>>     </bean>
>>>
>>>     <!--
>>>         Abstract IGFS file system configuration to be used as a template.
>>>     -->
>>>
>>>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>>>         <!-- Must correlate with cache affinity mapper. -->
>>>         <property name="blockSize" value="#{128 * 1024}"/>
>>>         <property name="perNodeBatchSize" value="512"/>
>>>         <property name="perNodeParallelBatchCount" value="16"/>
>>>
>>>         <property name="prefetchBlocks" value="32"/>
>>>     </bean>
>>>
>>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>>   <!-- Store cache entries on-heap. -->
>>>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>>>
>>>   <!-- Enable off-heap memory with max size of 14 gigabytes (0 for unlimited). -->
>>>
>>>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>>>   <!-- Configure eviction policy. -->
>>>   <property name="evictionPolicy">
>>>
>>>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>>>       <!-- Evict to off-heap after cache size reaches maxSize. -->
>>>       <property name="maxSize" value="3400000"/>
>>>
>>>     </bean>
>>>   </property>
>>>   </bean>
>>>
>>>     <!--
>>>
>>>         Abstract cache configuration for IGFS file data to be used as a template.
>>>     -->
>>>
>>>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>         <property name="cacheMode" value="PARTITIONED"/>
>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>         <property name="backups" value="0"/>
>>>         <property name="affinityMapper">
>>>
>>>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>>>
>>>                 <!-- How many sequential blocks will be stored on the same node. -->
>>>                 <constructor-arg value="512"/>
>>>             </bean>
>>>         </property>
>>>     </bean>
>>>
>>>     <!--
>>>
>>>         Abstract cache configuration for IGFS metadata to be used as a template.
>>>     -->
>>>
>>>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>         <property name="cacheMode" value="REPLICATED"/>
>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>     </bean>
>>>
>>>     <!--
>>>         Configuration of Ignite node.
>>>     -->
>>>
>>>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>>>         <!--
>>>             Apache Hadoop Accelerator configuration.
>>>         -->
>>>         <property name="hadoopConfiguration">
>>>
>>>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>>>
>>>                 <!-- Information about finished jobs will be kept for 300 seconds (value is in milliseconds). -->
>>>                 <property name="finishedJobInfoTtl" value="300000"/>
>>>
>>>             </bean>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>>>         -->
>>>         <property name="connectorConfiguration">
>>>
>>>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>>>                 <property name="port" value="11211"/>
>>>             </bean>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             Configure one IGFS file system instance named "igfs" on this node.
>>>         -->
>>>         <property name="fileSystemConfiguration">
>>>             <list>
>>>
>>>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>>>                     <property name="name" value="igfs"/>
>>>
>>>                     <!-- Caches with these names must be configured. -->
>>>                     <property name="metaCacheName" value="igfs-meta"/>
>>>                     <property name="dataCacheName" value="igfs-data"/>
>>>
>>>
>>>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>>>                     <property name="ipcEndpointConfiguration">
>>>
>>>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>>>                             <property name="type" value="TCP" />
>>>                             <property name="host" value="0.0.0.0" />
>>>                             <property name="port" value="10500" />
>>>                         </bean>
>>>                     </property>
>>>
>>>                     <!-- Sample secondary file system configuration.
>>>
>>>                         'uri'      - the URI of the secondary file system.
>>>
>>>                         'cfgPath'  - optional configuration path of the secondary file system,
>>>
>>>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>>>
>>>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>>>
>>>                             if Hadoop client and the Ignite node are running on behalf of different users.
>>>                     -->
>>>                     <property name="secondaryFileSystem">
>>>
>>>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>>>                             <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>
>>>
>>>
>>>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>>>
>>>                             <constructor-arg name="userName" value="client-user-name"/>
>>>                         </bean>
>>>                     </property>
>>>                 </bean>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>             Caches needed by IGFS.
>>>         -->
>>>         <property name="cacheConfiguration">
>>>             <list>
>>>                 <!-- File system metadata cache. -->
>>>
>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>>>                     <property name="name" value="igfs-meta"/>
>>>                 </bean>
>>>
>>>                 <!-- File system files data cache. -->
>>>
>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>>>                     <property name="name" value="igfs-data"/>
>>>                 </bean>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>             Disable events.
>>>         -->
>>>         <property name="includeEventTypes">
>>>             <list>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>>>         -->
>>>         <property name="discoverySpi">
>>>
>>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>                 <property name="ipFinder">
>>>
>>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>                         <property name="addresses">
>>>                             <list>
>>>                                 <value>202.99.96.170</value>
>>>                                 <value>202.99.69.170:47500..47509</value>
>>>                                 <value>202.99.96.174:47500..47509</value>
>>>                                 <value>202.99.96.178:47500..47509</value>
>>>                                 <value>202.99.69.174:47500..47509</value>
>>>                                 <value>202.99.69.178:47500..47509</value>
>>>                             </list>
>>>                         </property>
>>>                     </bean>
>>>                 </property>
>>>             </bean>
>>>         </property>
>>>     </bean>
>>> </beans>
>>>
>>> *and the ignite.sh config is *
>>> #!/bin/bash
>>> #
>>> # Licensed to the Apache Software Foundation (ASF) under one or more
>>> # contributor license agreements.  See the NOTICE file distributed with
>>> # this work for additional information regarding copyright ownership.
>>> # The ASF licenses this file to You under the Apache License, Version 2.0
>>> # (the "License"); you may not use this file except in compliance with
>>> # the License.  You may obtain a copy of the License at
>>> #
>>> #      http://www.apache.org/licenses/LICENSE-2.0
>>> #
>>> # Unless required by applicable law or agreed to in writing, software
>>> # distributed under the License is distributed on an "AS IS" BASIS,
>>>
>>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>> # See the License for the specific language governing permissions and
>>> # limitations under the License.
>>> #
>>>
>>> #
>>> # Grid command line loader.
>>> #
>>>
>>> #
>>> # Import common functions.
>>> #
>>> if [ "${IGNITE_HOME}" = "" ];
>>>     then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
>>>     else IGNITE_HOME_TMP=${IGNITE_HOME};
>>> fi
>>>
>>> #
>>> # Set SCRIPTS_HOME - base path to scripts.
>>> #
>>> SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"
>>>
>>> source "${SCRIPTS_HOME}"/include/functions.sh
>>>
>>> #
>>> # Discover path to Java executable and check it's version.
>>> #
>>> checkJava
>>>
>>> #
>>> # Discover IGNITE_HOME environment variable.
>>> #
>>> setIgniteHome
>>>
>>> if [ "${DEFAULT_CONFIG}" == "" ]; then
>>>     DEFAULT_CONFIG=config/default-config.xml
>>> fi
>>>
>>> #
>>> # Parse command line parameters.
>>> #
>>> . "${SCRIPTS_HOME}"/include/parseargs.sh
>>>
>>> #
>>> # Set IGNITE_LIBS.
>>> #
>>> . "${SCRIPTS_HOME}"/include/setenv.sh
>>>
>>> CP="${IGNITE_LIBS}"
>>>
>>>
>>> RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)
>>>
>>>
>>> RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
>>> RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"
>>>
>>> #
>>> # Find available port for JMX
>>> #
>>>
>>> # You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
>>> #
>>> # This is executed when -nojmx is not specified
>>> #
>>> if [ "${NOJMX}" == "0" ] ; then
>>>     findAvailableJmxPort
>>> fi
>>>
>>> # Mac OS specific support to display correct name in the dock.
>>> osname=`uname`
>>>
>>> if [ "${DOCK_OPTS}" == "" ]; then
>>>     DOCK_OPTS="-Xdock:name=Ignite Node"
>>> fi
>>>
>>> #
>>> # JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
>>> #
>>> # ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
>>> #
>>> if [ -z "$JVM_OPTS" ] ; then
>>>
>>>     JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
>>> fi
>>>
>>> #
>>>
>>> # Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
>>> #
>>>
>>> # JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
>>>
>>> # JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"
>>>
>>> #
>>> # Uncomment if you get StackOverflowError.
>>> # On 64 bit systems this value can be larger, e.g. -Xss16m
>>> #
>>> # JVM_OPTS="${JVM_OPTS} -Xss4m"
>>>
>>> #
>>> # Uncomment to set preference for IPv4 stack.
>>> #
>>> # JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"
>>>
>>> #
>>> # Assertions are disabled by default since version 3.5.
>>> # If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
>>> #
>>> ENABLE_ASSERTIONS="0"
>>>
>>> #
>>> # Set '-ea' options if assertions are enabled.
>>> #
>>> if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
>>>     JVM_OPTS="${JVM_OPTS} -ea"
>>> fi
>>>
>>> #
>>> # Set main class to start service (grid node by default).
>>> #
>>> if [ "${MAIN_CLASS}" = "" ]; then
>>>     MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
>>> fi
>>>
>>> #
>>> # Remote debugging (JPDA).
>>> # Uncomment and change if remote debugging is required.
>>> #
>>>
>>> # JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"
>>>
>>> ERRORCODE="-1"
>>>
>>> while [ "${ERRORCODE}" -ne "130" ]
>>> do
>>>     if [ "${INTERACTIVE}" == "1" ] ; then
>>>         case $osname in
>>>             Darwin*)
>>>
>>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>>>
>>>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>>>             ;;
>>>             *)
>>>
>>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>>>
>>>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>>>             ;;
>>>         esac
>>>     else
>>>         case $osname in
>>>             Darwin*)
>>>
>>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>>>
>>>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>>             ;;
>>>             *)
>>>
>>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>>>
>>>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>>             ;;
>>>         esac
>>>     fi
>>>
>>>     ERRORCODE="$?"
>>>
>>>     if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
>>>         break
>>>     else
>>>         rm -f "${RESTART_SUCCESS_FILE}"
>>>     fi
>>> done
>>>
>>> if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
>>>     rm -f "${RESTART_SUCCESS_FILE}"
>>> fi
>>>
>>> *and the log info; it looks normal.*
>>>
>>> [18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>> [18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=7965370b, name=null]
>>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>>     ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
>>>     ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
>>>     ^-- Public thread pool [active=9, idle=39, qSize=0]
>>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>>     ^-- Outbound messages queue [size=0]
>>>
>>> [18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>> [18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=7965370b, name=null]
>>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>>     ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
>>>     ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
>>>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>>     ^-- Outbound messages queue [size=0]
>>>
>>> [18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>> [18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=7965370b, name=null]
>>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>>     ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
>>>     ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
>>>     ^-- Public thread pool [active=2, idle=46, qSize=0]
>>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>>     ^-- Outbound messages queue [size=0]
>>>
>>> [18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>> [18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=7965370b, name=null]
>>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>>     ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
>>>     ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
>>>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>>     ^-- Outbound messages queue [size=0]
>>>
>>> [18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>> [18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>>     ^-- Node [id=7965370b, name=null]
>>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>>     ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
>>>     ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
>>>     ^-- Public thread pool [active=3, idle=45, qSize=0]
>>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>>     ^-- Outbound messages queue [size=0]
>>>
>>> [18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> [18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>>
>>> ------------------------------
>>>
>>>
>>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>>> *Date:* 2016-03-24 21:00
>>> *To:* user <us...@ignite.apache.org>
>>> *Subject:* Re: Re: about mr accelerator question.
>>> Hi,
>>>
>>> Possible speedup greatly depends on the nature of your task. Typically,
>>> the more MR tasks you have and the more intensively you work with actual
>>> data, the bigger the improvement that can be achieved. Please give more
>>> details on what kind of jobs you run, and I will probably be able to
>>> suggest something.
>>>
>>> One possible change you can make to your config: switch the temporary
>>> file system paths used by your jobs to PRIMARY mode. This way, all temp
>>> data will reside only in memory and will never hit HDFS.
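The PRIMARY-mode suggestion above can be sketched as a fragment of the IGFS `FileSystemConfiguration` bean using its `pathModes` property. This is only an illustration, not part of the original reply: the `/tmp/.*` pattern is a hypothetical placeholder and should be replaced with the staging directories your jobs actually write to.

```xml
<!-- Sketch: inside the FileSystemConfiguration bean, map temporary job
     paths to PRIMARY mode so their blocks live only in IGFS memory and
     are never written through to the secondary HDFS. -->
<property name="pathModes">
    <map>
        <!-- "/tmp/.*" is an assumed example of a job staging path. -->
        <entry key="/tmp/.*" value="PRIMARY"/>
    </map>
</property>
```

Paths not matched by any entry keep the file system's default mode, so only the temp directories bypass HDFS.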
>>>
>>> Vladimir.
>>>
>>> On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com>
>>> wrote:
>>>
>>>> I am glad to tell you the problem has been solved, thanks a lot. But
>>>> the performance improved by only 300%; are there other good ideas for
>>>> the config?
>>>>
>>>> Another problem is that I am not able to track jobs the way I could with
>>>> the YARN framework, so I cannot count the jobs or view the state of the
>>>> ones that have finished. Is there a good suggestion for this?
>>>>
>>>> the ignite config is
>>>>
>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>>
>>>> <!--
>>>>   Licensed to the Apache Software Foundation (ASF) under one or more
>>>>   contributor license agreements.  See the NOTICE file distributed with
>>>>   this work for additional information regarding copyright ownership.
>>>>
>>>>   The ASF licenses this file to You under the Apache License, Version 2.0
>>>>   (the "License"); you may not use this file except in compliance with
>>>>   the License.  You may obtain a copy of the License at
>>>>
>>>>        http://www.apache.org/licenses/LICENSE-2.0
>>>>
>>>>   Unless required by applicable law or agreed to in writing, software
>>>>   distributed under the License is distributed on an "AS IS" BASIS,
>>>>
>>>>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>>   See the License for the specific language governing permissions and
>>>>   limitations under the License.
>>>> -->
>>>>
>>>> <!--
>>>>     Ignite Spring configuration file.
>>>>
>>>>
>>>>     When starting a standalone Ignite node, you need to execute the following command:
>>>>
>>>>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>>>>
>>>>
>>>>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>>>>     Ignition.start("path-to-this-file/default-config.xml");
>>>> -->
>>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>>        xmlns:util="http://www.springframework.org/schema/util"
>>>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>>        http://www.springframework.org/schema/beans/spring-beans.xsd
>>>>        http://www.springframework.org/schema/util
>>>>        http://www.springframework.org/schema/util/spring-util.xsd">
>>>>
>>>>     <!--
>>>>         Optional description.
>>>>     -->
>>>>     <description>
>>>>
>>>>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>>>>         Ignite node will start with this configuration by default.
>>>>     </description>
>>>>
>>>>     <!--
>>>>
>>>>         Initialize property configurer so we can reference environment variables.
>>>>     -->
>>>>
>>>>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>>>
>>>>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>>>>         <property name="searchSystemEnvironment" value="true"/>
>>>>     </bean>
>>>>
>>>>     <!--
>>>>
>>>>         Abstract IGFS file system configuration to be used as a template.
>>>>     -->
>>>>
>>>>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>>>>         <!-- Must correlate with cache affinity mapper. -->
>>>>         <property name="blockSize" value="#{128 * 1024}"/>
>>>>         <property name="perNodeBatchSize" value="512"/>
>>>>         <property name="perNodeParallelBatchCount" value="16"/>
>>>>
>>>>         <property name="prefetchBlocks" value="32"/>
>>>>     </bean>
>>>>
>>>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>>>   <!-- Store cache entries on-heap. -->
>>>>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>>>>
>>>>   <!-- Enable off-heap memory with a max size of 14 gigabytes (0 for unlimited). -->
>>>>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>>>>   <!-- Configure eviction policy. -->
>>>>   <property name="evictionPolicy">
>>>>
>>>>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>>>>       <!-- Evict IGFS blocks from memory once their total size reaches maxSize. -->
>>>>       <property name="maxSize" value="800000"/>
>>>>     </bean>
>>>>   </property>
>>>>   </bean>
>>>>
>>>>     <!--
>>>>
>>>>         Abstract cache configuration for IGFS file data to be used as a template.
>>>>     -->
>>>>
>>>>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>>         <property name="cacheMode" value="PARTITIONED"/>
>>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>>         <property name="backups" value="0"/>
>>>>         <property name="affinityMapper">
>>>>
>>>>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>>>>
>>>>                 <!-- How many sequential blocks will be stored on the same node. -->
>>>>                 <constructor-arg value="512"/>
>>>>             </bean>
>>>>         </property>
>>>>     </bean>
>>>>
>>>>     <!--
>>>>
>>>>         Abstract cache configuration for IGFS metadata to be used as a template.
>>>>     -->
>>>>
>>>>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>>         <property name="cacheMode" value="REPLICATED"/>
>>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>>     </bean>
>>>>
>>>>     <!--
>>>>         Configuration of Ignite node.
>>>>     -->
>>>>
>>>>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>>>>         <!--
>>>>             Apache Hadoop Accelerator configuration.
>>>>         -->
>>>>         <property name="hadoopConfiguration">
>>>>
>>>>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>>>>
>>>>                 <!-- Information about finished jobs will be kept for 30 seconds. -->
>>>>                 <property name="finishedJobInfoTtl" value="30000"/>
>>>>             </bean>
>>>>         </property>
>>>>
>>>>         <!--
>>>>
>>>>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>>>>         -->
>>>>         <property name="connectorConfiguration">
>>>>
>>>>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>>>>                 <property name="port" value="11211"/>
>>>>             </bean>
>>>>         </property>
>>>>
>>>>         <!--
>>>>
>>>>             Configure one IGFS file system instance named "igfs" on this node.
>>>>         -->
>>>>         <property name="fileSystemConfiguration">
>>>>             <list>
>>>>
>>>>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>>>>                     <property name="name" value="igfs"/>
>>>>
>>>>                     <!-- Caches with these names must be configured. -->
>>>>                     <property name="metaCacheName" value="igfs-meta"/>
>>>>                     <property name="dataCacheName" value="igfs-data"/>
>>>>
>>>>
>>>>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>>>>                     <property name="ipcEndpointConfiguration">
>>>>
>>>>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>>>>                             <property name="type" value="TCP" />
>>>>                             <property name="host" value="0.0.0.0" />
>>>>                             <property name="port" value="10500" />
>>>>                         </bean>
>>>>                     </property>
>>>>
>>>>                     <!-- Sample secondary file system configuration.
>>>>
>>>>                         'uri'      - the URI of the secondary file system.
>>>>
>>>>                         'cfgPath'  - optional configuration path of the secondary file system,
>>>>
>>>>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>>>>
>>>>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>>>>
>>>>                             if Hadoop client and the Ignite node are running on behalf of different users.
>>>>                     -->
>>>>                     <property name="secondaryFileSystem">
>>>>
>>>>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>>>>
>>>>                             <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
>>>>
>>>>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>>>>
>>>>                             <constructor-arg name="userName" value="client-user-name"/>
>>>>                         </bean>
>>>>                     </property>
>>>>                 </bean>
>>>>             </list>
>>>>         </property>
>>>>
>>>>         <!--
>>>>             Caches needed by IGFS.
>>>>         -->
>>>>         <property name="cacheConfiguration">
>>>>             <list>
>>>>                 <!-- File system metadata cache. -->
>>>>
>>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>>>>                     <property name="name" value="igfs-meta"/>
>>>>                 </bean>
>>>>
>>>>                 <!-- File system files data cache. -->
>>>>
>>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>>>>                     <property name="name" value="igfs-data"/>
>>>>                 </bean>
>>>>             </list>
>>>>         </property>
>>>>
>>>>         <!--
>>>>             Disable events.
>>>>         -->
>>>>         <property name="includeEventTypes">
>>>>             <list>
>>>>
>>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>>>>
>>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>>>>
>>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>>>>             </list>
>>>>         </property>
>>>>
>>>>         <!--
>>>>
>>>>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>>>>         -->
>>>>         <property name="discoverySpi">
>>>>
>>>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>>                 <property name="ipFinder">
>>>>
>>>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>>                         <property name="addresses">
>>>>                             <list>
>>>>                                 <value>*.*.*.*</value>
>>>>                                 <value>*.*.*.*:47500..47509</value>
>>>>                             </list>
>>>>                         </property>
>>>>                     </bean>
>>>>                 </property>
>>>>             </bean>
>>>>         </property>
>>>>     </bean>
>>>> </beans>
>>>>
>>>> ------------------------------
>>>> liym@runstone.com
>>>> Beijing Runstone Fenghua Technology Co., Ltd.
>>>> Li Yiming (like wind exist)
>>>> Phone: 13811682465
>>>>
>>>>
>>>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>>>> *Date:* 2016-03-17 13:37
>>>> *To:* user <us...@ignite.apache.org>
>>>> *Subject:* Re: about mr accelerator question.
>>>> Hi,
>>>>
>>>> The fact that you can work with 29G of data with only 8G of memory
>>>> might be caused by the following things:
>>>> 1) Your job doesn't use all data from the cluster and hence caches only
>>>> part of it. This is the most likely case.
>>>> 2) You have an eviction policy configured for the IGFS data cache.
>>>> 3) Or maybe you use offheap.
>>>> Please provide the full XML configuration and we will be able to
>>>> understand it.
>>>>
>>>> Anyways, your initial question was about out-of-memory. Could you
>>>> provide the exact error message? Is it about heap memory or maybe
>>>> permgen?
>>>>
>>>> As for execution time, this depends on your workload. If there are lots
>>>> of map tasks and very active work with data, you will see an improvement
>>>> in speed. If there are lots of operations on the file system (e.g.
>>>> mkdirs, move, etc.) and very few map jobs, chances are there will be no
>>>> speedup at all. Provide more details on the job you test and the type of
>>>> data you use and we will be able to give you more ideas on what to do.
>>>>
>>>> Vladimir.
>>>>
>>>>
>>>>
>>>>
>>>
>>
>

Re: Re: about mr accelerator question.

Posted by "liym@runstone.com" <li...@runstone.com>.
Then I found some warnings in the log, so I changed default-config.xml:

[17:31:23,540][WARN ][grid-nio-worker-2-#102%null%][TcpCommunicationSpi] Communication SPI Session write timed out (consider increasing 'socketWriteTimeout' configuration property) [remoteAddr=rslog5-tj/202.99.69.174:47100, writeTimeout=2000]

 
 <property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
      <!-- Increase socket write timeout (default is 2000 ms). -->
      <property name="socketWriteTimeout" value="60000"/>
    </bean>
 </property>

But then another error appeared:
Mar 30, 2016 6:14:40 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /202.99.69.174:11211
Exception in thread "main" java.io.IOException: Job tracker doesn't have any information about the job: job_05559fd1-37aa-4a52-aa38-02adf020972f_0001
at org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:186)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:325)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:610)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1355)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1317)
at mapreduce.DomainsSecondPVByIPMR.main(DomainsSecondPVByIPMR.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
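
One possible cause of "Job tracker doesn't have any information about the job" (an assumption, not confirmed in this thread): the job completed, but its status record expired before the client polled it, since finishedJobInfoTtl can be as short as the 30000 ms seen in the configuration quoted earlier. A sketch of keeping finished job info around longer:

```xml
<!-- Sketch: retain information about finished jobs for 5 minutes (value in ms),
     so a slow client can still fetch the final job status. -->
<property name="hadoopConfiguration">
    <bean class="org.apache.ignite.configuration.HadoopConfiguration">
        <property name="finishedJobInfoTtl" value="300000"/>
    </bean>
</property>
```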

From the log I found another warning:

18:10:28,458][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-323-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:10:29,470][WARN ][grid-nio-worker-0-#100%null%][TcpCommunicationSpi] Failed to process selector key (will close): GridSelectorNioSessionImpl [selectorIdx=0, queueSize=217, writeBuf=java.nio.DirectByteBuffer[pos=12496 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], recovery=GridNioRecoveryDescriptor [acked=64416, resendCnt=0, rcvCnt=64715, reserved=true, lastAck=64704, nodeLeft=false, node=TcpDiscoveryNode [id=1a33b0e1-1627-4908-aa98-86d7fe19a8c5, addrs=[127.0.0.1, 202.99.96.170], sockAddrs=[rslog1-tj/202.99.96.170:47500, /127.0.0.1:47500, /202.99.96.170:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1459331261883, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], connected=true, connectCnt=0, queueLimit=5120], super=GridNioSessionImpl [locAddr=/202.99.69.170:47100, rmtAddr=/202.99.96.170:37587, createTime=1459331262015, closeTime=0, bytesSent=3934874712, bytesRcvd=4478411704, sndSchedTime=1459332629400, lastSndTime=1459332621394, lastRcvTime=1459332620807, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser@55fa1f31, directMode=true], GridConnectionBytesVerifyFilter], accepted=true]]
[18:10:29,503][WARN ][grid-nio-worker-0-#100%null%][TcpCommunicationSpi] Closing NIO session because of unhandled exception [cls=class o.a.i.i.util.nio.GridNioException, msg=Connection reset by peer]
[18:10:29,541][WARN ][disco-event-worker-#113%null%][GridDiscoveryManager] Node FAILED: TcpDiscoveryNode [id=1a33b0e1-1627-4908-aa98-86d7fe19a8c5, addrs=[127.0.0.1, 202.99.96.170], sockAddrs=[rslog1-tj/202.99.96.170:47500, /127.0.0.1:47500, /202.99.96.170:47500], discPort=47500, order=1, intOrder=1, lastExchangeTime=1459331261883, loc=false, ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false]
[18:10:29,543][INFO ][disco-event-worker-#113%null%][GridDiscoveryManager] Topology snapshot [ver=8, servers=5, clients=0, CPUs=120, heap=160.0GB]
[18:10:30,206][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-331-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:10:32,690][INFO ][Hadoop-task-05559fd1-37aa-4a52-aa38-02adf020972f_1-MAP-324-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:10:33,071][INFO ][exchange-worker-#115%null%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=8, minorTopVer=0], evt=NODE_FAILED, node=1a33b0e1-1627-4908-aa98-86d7fe19a8c5]
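
The "Connection reset by peer" followed by "Node FAILED" means a peer stopped responding, typically because its JVM died or stalled in a long GC pause. If GC pauses are the cause, raising the failure detection timeout can keep neighbors from dropping the node. A sketch (failureDetectionTimeout is available on IgniteConfiguration in 1.5; value and placement are illustrative):

```xml
<!-- Sketch: tolerate longer GC pauses (value in ms, default 10000) before
     other nodes consider this node failed. -->
<bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="failureDetectionTimeout" value="60000"/>
    <!-- ... rest of the configuration unchanged ... -->
</bean>
```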





From: liym@runstone.com
Date: 2016-03-30 17:36
To: user
Subject: Re: Re: about mr accelerator question.
I am sorry that the description was not clear.
On the failing node, there is an exception:

Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /202.99.69.178:11211
Exception in thread "main" Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /202.99.96.178:11211
java.io.IOException: Failed to get job status: job_c1de7618-b0f1-4159-ade4-57e305d4667f_0001
at org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:191)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:325)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:610)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1356)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1317)
at mapreduce.DomainsSecondPVByIPMR.main(DomainsSecondPVByIPMR.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: class org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException: Failed to perform request (connection failed): /202.99.96.178:11211
at org.apache.ignite.internal.client.impl.connection.GridClientConnection.getCloseReasonAsException(GridClientConnection.java:491)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:336)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:296)
at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onDisconnected(GridClientConnectionManagerAdapter.java:605)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionClosed(GridNioFilterChain.java:249)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionClosed(GridNioCodecFilter.java:70)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionClosed(GridNioServer.java:2115)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionClosed(GridNioFilterChain.java:147)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:1659)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:731)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeys(GridNioServer.java:1463)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1398)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1280)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)

 
From: Vladimir Ozerov
Date: 2016-03-29 19:53
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

Sorry, I still do not understand the question well. Do you need to understand why the node was killed? Or did something go wrong with the cluster after the node was killed?

Vladimir.

On Tue, Mar 29, 2016 at 4:50 AM, liym@runstone.com <li...@runstone.com> wrote:
One of the node processes is killed automatically when executing the MR task, so the other nodes cannot send messages to the killed node.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$
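
The bare "Killed" at the end, with no Java exception, usually means the operating system terminated the JVM, most often the Linux OOM killer when the box runs out of physical memory. The kernel log can confirm this (a sketch, assuming a Linux host where the kernel log is readable):

```shell
# Look for OOM-killer records explaining an abruptly "Killed" JVM.
# Falls back to /var/log/syslog, then to a message if nothing is readable.
dmesg 2>/dev/null | grep -iE "out of memory|killed process" \
  || grep -iE "out of memory|killed process" /var/log/syslog 2>/dev/null \
  || echo "no OOM-killer records found (or insufficient permissions)"
```

If an OOM kill shows up, the remedies are the ones already discussed in this thread: more physical memory, a smaller heap/offheap per node, or stricter eviction.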




 
From: Vladimir Ozerov
Date: 2016-03-28 18:57
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

I'm not sure I understand which error you mean. At least, I do not see any exceptions in the log. Could you please clarify?

Vladimir.

On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com> wrote:
I have a question. I now have 6 Ignite nodes, and there is an error when the MR task is running: one node usually gets killed. Can you tell me why? Thanks a lot.
With only one or two nodes, I don't see this error.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$

all nodes have the same config
<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="3400000"/>

    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="300000"/>

            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>

                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>202.99.96.170</value>
                                <value>202.99.69.170:47500..47509</value>
                                <value>202.99.96.174:47500..47509</value>
                                <value>202.99.96.178:47500..47509</value>
                                <value>202.99.69.174:47500..47509</value>
                                <value>202.99.69.178:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

and the ignite.sh config is:
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Grid command line loader.
#

#
# Import common functions.
#
if [ "${IGNITE_HOME}" = "" ];
    then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
    else IGNITE_HOME_TMP=${IGNITE_HOME};
fi

#
# Set SCRIPTS_HOME - base path to scripts.
#
SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"

source "${SCRIPTS_HOME}"/include/functions.sh

#
# Discover path to Java executable and check its version.
#
checkJava

#
# Discover IGNITE_HOME environment variable.
#
setIgniteHome

if [ "${DEFAULT_CONFIG}" == "" ]; then
    DEFAULT_CONFIG=config/default-config.xml
fi

#
# Parse command line parameters.
#
. "${SCRIPTS_HOME}"/include/parseargs.sh

#
# Set IGNITE_LIBS.
#
. "${SCRIPTS_HOME}"/include/setenv.sh

CP="${IGNITE_LIBS}"

RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)

RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"

#
# Find available port for JMX
#
# You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
#
# This is executed when -nojmx is not specified
#
if [ "${NOJMX}" == "0" ] ; then
    findAvailableJmxPort
fi

# Mac OS specific support to display correct name in the dock.
osname=`uname`

if [ "${DOCK_OPTS}" == "" ]; then
    DOCK_OPTS="-Xdock:name=Ignite Node"
fi

#
# JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
#
# ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
#
if [ -z "$JVM_OPTS" ] ; then
    JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
fi

#
# Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
#
# JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
# JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"

#
# Uncomment if you get StackOverflowError.
# On 64 bit systems this value can be larger, e.g. -Xss16m
#
# JVM_OPTS="${JVM_OPTS} -Xss4m"

#
# Uncomment to set preference for IPv4 stack.
#
# JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"

#
# Assertions are disabled by default since version 3.5.
# If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
#
ENABLE_ASSERTIONS="0"

#
# Set '-ea' options if assertions are enabled.
#
if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
    JVM_OPTS="${JVM_OPTS} -ea"
fi

#
# Set main class to start service (grid node by default).
#
if [ "${MAIN_CLASS}" = "" ]; then
    MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
fi

#
# Remote debugging (JPDA).
# Uncomment and change if remote debugging is required.
#
# JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"

ERRORCODE="-1"

while [ "${ERRORCODE}" -ne "130" ]
do
    if [ "${INTERACTIVE}" == "1" ] ; then
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
        esac
    else
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
        esac
    fi

    ERRORCODE="$?"

    if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
        break
    else
        rm -f "${RESTART_SUCCESS_FILE}"
    fi
done

if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
    rm -f "${RESTART_SUCCESS_FILE}"
fi

and the log info; it looks normal:
[18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
    ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
    ^-- Public thread pool [active=9, idle=39, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
    ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
    ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
    ^-- Public thread pool [active=2, idle=46, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
    ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
    ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
    ^-- Public thread pool [active=3, idle=45, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]




 
From: Vladimir Ozerov
Date: 2016-03-24 21:00
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

The possible speedup depends greatly on the nature of your task. Typically, the more MR tasks you have and the more intensively you work with the actual data, the bigger the improvement that can be achieved. Please give more details on what kind of jobs you run, and I can probably suggest something.

One possible change you can make to your config is to switch the temporary file system paths used by your jobs to PRIMARY mode. This way all temp data will reside only in memory and will never hit HDFS.
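As a rough sketch of the PRIMARY-mode change (the path patterns below are illustrative examples, not taken from your jobs), something like this could go inside the FileSystemConfiguration bean:

```xml
<!-- Sketch: map Hadoop temp/staging paths to PRIMARY mode so that
     intermediate data stays in IGFS memory and never touches the
     secondary HDFS. Path patterns are examples; adjust them to the
     temp directories your jobs actually use. -->
<property name="pathModes">
    <map>
        <entry key="/tmp/.*" value="PRIMARY"/>
        <entry key="/user/.*/staging/.*" value="PRIMARY"/>
    </map>
</property>
```

Paths that do not match any pattern keep the file system's default mode, so only the matched temp data bypasses HDFS.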

Vladimir.

On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com> wrote:
I am glad to tell you that the problem has been solved, thanks a lot. However, performance improved by only 300%; is there any other good configuration idea?
Another problem is that I am not able to track jobs the way I could with the YARN framework, so I cannot count the jobs or view the state of those that have already finished. Is there a good suggestion?

the ignite config is 

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable off-heap memory with max size of 14 gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="800000"/>
    </bean>
  </property>
  </bean>
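Note that the CacheConfiguration bean above is a top-level anonymous bean: it is not named "igfs-data" and is not referenced from the fileSystemConfiguration, so as written its memory mode, off-heap limit, and eviction policy never apply to the IGFS data cache. A sketch (unverified against this cluster) of how the same settings could instead be attached to the actual data cache, reusing the dataCacheCfgBase template defined later in this file, might look like:

```xml
<!-- Sketch only: attach the memory settings to the real IGFS data cache
     by extending the dataCacheCfgBase template and naming the cache
     "igfs-data", so the fileSystemConfiguration picks it up. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
    <property name="name" value="igfs-data"/>
    <!-- Keep entries on-heap, spilling to off-heap when evicted. -->
    <property name="memoryMode" value="ONHEAP_TIERED"/>
    <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
            <property name="maxSize" value="800000"/>
        </bean>
    </property>
</bean>
```

With this form, the standalone "igfs-data" bean in the cacheConfiguration list below would be replaced by this one, so the eviction settings and the cache name live in a single definition.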

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="30000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Enable only the task events required by the Hadoop Accelerator.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>*.*.*.*</value>
                                <value>*.*.*.*:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>



liym@runstone.com
Beijing Runstone Fenghua Technology Co., Ltd. (北京润通丰华科技有限公司)
Li Yiming (李宜明)
Tel: 13811682465
 
From: Vladimir Ozerov
Date: 2016-03-17 13:37
To: user
Subject: Re: about mr accelerator question.
Hi,
 
The fact that you can work with 29G cluster with only 8G of memory might be
caused by the following things:
1) Your job doesn't use all data form cluster and hence caches only part of
it. This is the most likely case.
2) You have eviction policy configured for IGFS data cache. 
3) Or may be you use offheap.
Please provide the full XML configuration and we will be able to understand
it.
 
Anyways, your initial question was about out-of-memory. Could you provide
exact error message? Is it about heap memory or may be permgen?
 
As per execution time, this depends on your workload. If there are lots map
tasks and very active work with data, you will see improvement in speed. If
there are lots operations on file system (e.g. mkdirs, move, etc.) and very
little amount of map jobs, chances there will be no speedup at all. Provide
more details on the job you test and type of data you use and we will be
able to give you more ideas on what to do.
 
Vladimir.
 
 
--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




Re: Re: about mr accelerator question.

Posted by "liym@runstone.com" <li...@runstone.com>.
I am sorry that the description was not clear.
On the failing node there is an exception:

Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /202.99.69.178:11211
Exception in thread "main" Mar 30, 2016 5:11:58 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection close
INFO: Client TCP connection closed: /202.99.96.178:11211
java.io.IOException: Failed to get job status: job_c1de7618-b0f1-4159-ade4-57e305d4667f_0001
at org.apache.ignite.internal.processors.hadoop.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:191)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:325)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:322)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:322)
at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:610)
at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1356)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1317)
at mapreduce.DomainsSecondPVByIPMR.main(DomainsSecondPVByIPMR.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: class org.apache.ignite.internal.client.impl.connection.GridClientConnectionResetException: Failed to perform request (connection failed): /202.99.96.178:11211
at org.apache.ignite.internal.client.impl.connection.GridClientConnection.getCloseReasonAsException(GridClientConnection.java:491)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:336)
at org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection.close(GridClientNioTcpConnection.java:296)
at org.apache.ignite.internal.client.impl.connection.GridClientConnectionManagerAdapter$NioListener.onDisconnected(GridClientConnectionManagerAdapter.java:605)
at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onSessionClosed(GridNioFilterChain.java:249)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioCodecFilter.onSessionClosed(GridNioCodecFilter.java:70)
at org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedSessionClosed(GridNioFilterAdapter.java:93)
at org.apache.ignite.internal.util.nio.GridNioServer$HeadFilter.onSessionClosed(GridNioServer.java:2115)
at org.apache.ignite.internal.util.nio.GridNioFilterChain.onSessionClosed(GridNioFilterChain.java:147)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.close(GridNioServer.java:1659)
at org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:731)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeys(GridNioServer.java:1463)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1398)
at org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1280)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)

 
From: Vladimir Ozerov
Date: 2016-03-29 19:53
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

Sorry, still do not understand the question well. Do you need to understand why the node was killed? Or something wrong happened to a cluster after the node had been killed?

Vladimir.

On Tue, Mar 29, 2016 at 4:50 AM, liym@runstone.com <li...@runstone.com> wrote:
The process on one of the nodes is automatically killed when the MR task executes, so the other nodes can no longer send messages to the killed node.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$




 
From: Vladimir Ozerov
Date: 2016-03-28 18:57
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

I'm not sure I understand what error do you mean. At least, I do not see any exceptions in the log. Could you please clarify?

Vladimir.

On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com> wrote:
There is a question: I now have 6 Ignite nodes, and there is an error when the MR task is running. One node is usually killed. Can you tell me why? Thanks a lot.
With only one or two nodes I do not see this error.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$

All nodes have the same config:
<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable off-heap memory with max size of 14 gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="3400000"/>

    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 300 seconds. -->
                <property name="finishedJobInfoTtl" value="300000"/>

            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>

                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Enable only the task events required by the Hadoop Accelerator.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>202.99.96.170</value>
                                <value>202.99.69.170:47500..47509</value>
                                <value>202.99.96.174:47500..47509</value>
                                <value>202.99.96.178:47500..47509</value>
                                <value>202.99.69.174:47500..47509</value>
                                <value>202.99.69.178:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

And the ignite.sh config is:
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Grid command line loader.
#

#
# Import common functions.
#
if [ "${IGNITE_HOME}" = "" ];
    then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
    else IGNITE_HOME_TMP=${IGNITE_HOME};
fi

#
# Set SCRIPTS_HOME - base path to scripts.
#
SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"

source "${SCRIPTS_HOME}"/include/functions.sh

#
# Discover path to Java executable and check it's version.
#
checkJava

#
# Discover IGNITE_HOME environment variable.
#
setIgniteHome

if [ "${DEFAULT_CONFIG}" == "" ]; then
    DEFAULT_CONFIG=config/default-config.xml
fi

#
# Parse command line parameters.
#
. "${SCRIPTS_HOME}"/include/parseargs.sh

#
# Set IGNITE_LIBS.
#
. "${SCRIPTS_HOME}"/include/setenv.sh

CP="${IGNITE_LIBS}"

RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)

RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"

#
# Find available port for JMX
#
# You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
#
# This is executed when -nojmx is not specified
#
if [ "${NOJMX}" == "0" ] ; then
    findAvailableJmxPort
fi

# Mac OS specific support to display correct name in the dock.
osname=`uname`

if [ "${DOCK_OPTS}" == "" ]; then
    DOCK_OPTS="-Xdock:name=Ignite Node"
fi

#
# JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
#
# ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
#
if [ -z "$JVM_OPTS" ] ; then
    JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
fi

#
# Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
#
# JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
# JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"

#
# Uncomment if you get StackOverflowError.
# On 64 bit systems this value can be larger, e.g. -Xss16m
#
# JVM_OPTS="${JVM_OPTS} -Xss4m"

#
# Uncomment to set preference for IPv4 stack.
#
# JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"

#
# Assertions are disabled by default since version 3.5.
# If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
#
ENABLE_ASSERTIONS="0"

#
# Set '-ea' options if assertions are enabled.
#
if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
    JVM_OPTS="${JVM_OPTS} -ea"
fi

#
# Set main class to start service (grid node by default).
#
if [ "${MAIN_CLASS}" = "" ]; then
    MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
fi

#
# Remote debugging (JPDA).
# Uncomment and change if remote debugging is required.
#
# JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"

ERRORCODE="-1"

while [ "${ERRORCODE}" -ne "130" ]
do
    if [ "${INTERACTIVE}" == "1" ] ; then
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
        esac
    else
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
        esac
    fi

    ERRORCODE="$?"

    if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
        break
    else
        rm -f "${RESTART_SUCCESS_FILE}"
    fi
done

if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
    rm -f "${RESTART_SUCCESS_FILE}"
fi

and the log info; it looks normal.
[18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
    ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
    ^-- Public thread pool [active=9, idle=39, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
    ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
    ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
    ^-- Public thread pool [active=2, idle=46, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
    ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
    ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
    ^-- Public thread pool [active=3, idle=45, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]




 
From: Vladimir Ozerov
Date: 2016-03-24 21:00
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

The possible speedup depends greatly on the nature of your task. Typically, the more MR tasks you have and the more intensively you work with the actual data, the bigger the improvement that can be achieved. Please give more details on what kind of jobs you run, and I will probably be able to suggest something. 

One possible change you can make to your config is to switch the temporary file system paths used by your jobs to PRIMARY mode. This way all temp data will reside only in memory and will never hit HDFS.
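For illustration, path modes can be assigned per path pattern on the IGFS FileSystemConfiguration bean. This is only a sketch: the /tmp/.* pattern below is an assumption, and you would point it at the temporary directories your jobs actually write to.

```xml
<!-- Sketch only: add to the FileSystemConfiguration bean for "igfs".
     The /tmp/.* pattern is a placeholder; use your jobs' real temp dirs. -->
<property name="pathModes">
    <map>
        <!-- Files under this pattern stay in memory only (no HDFS write-through). -->
        <entry key="/tmp/.*" value="PRIMARY"/>
    </map>
</property>
```

Paths that do not match any pattern keep the file system's default mode, so persistent input and output data still goes through the secondary HDFS.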

Vladimir.

On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com> wrote:
I am glad to tell you the problem has been solved, thanks a lot. But the performance improved by only 300%; are there other configuration ideas?
Another problem is that I am not able to track jobs the way I can with the YARN framework, so I cannot count the jobs or view the state of the ones that have finished. Is there a good suggestion for this?

the ignite config is 

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="800000"/>
    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="30000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>*.*.*.*</value>
                                <value>*.*.*.*:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>



liym@runstone.com 
Beijing Runtong Fenghua Technology Co., Ltd.
Li Yiming (like wind exist)
Tel: 13811682465
 
From: Vladimir Ozerov
Date: 2016-03-17 13:37
To: user
Subject: Re: about mr accelerator question.
Hi,
 
The fact that you can work with a 29G cluster with only 8G of memory might be
caused by the following things:
1) Your job doesn't use all data from the cluster and hence caches only part of
it. This is the most likely case.
2) You have an eviction policy configured for the IGFS data cache.
3) Or maybe you use offheap.
Please provide the full XML configuration and we will be able to understand
it.
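For context on points 2) and 3): both eviction and off-heap storage are configured on the IGFS data cache itself. A minimal sketch against the Ignite 1.x configuration API (the sizes here are illustrative, not recommendations; note that IgfsPerBlockLruEvictionPolicy.maxSize is specified in bytes):

```xml
<bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
    <!-- Keep hot entries on-heap, overflow to off-heap (point 3). -->
    <property name="memoryMode" value="ONHEAP_TIERED"/>
    <property name="offHeapMaxMemory" value="#{16 * 1024L * 1024L * 1024L}"/>

    <!-- Evict cold IGFS blocks once their total size exceeds maxSize (point 2). -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
            <!-- Maximum total size of cached blocks, in bytes (8 GB here). -->
            <property name="maxSize" value="#{8 * 1024L * 1024L * 1024L}"/>
        </bean>
    </property>
</bean>
```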
 
Anyway, your initial question was about out-of-memory. Could you provide the
exact error message? Is it about heap memory or maybe permgen?
 
As for execution time, this depends on your workload. If there are lots of map
tasks and very active work with data, you will see an improvement in speed. If
there are lots of file system operations (e.g. mkdirs, move, etc.) and very
few map jobs, chances are there will be no speedup at all. Provide more
details on the job you are testing and the type of data you use, and we will
be able to give you more ideas on what to do.
 
Vladimir.
 
 
--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




Re: Re: about mr accelerator question.

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Hi,

Sorry, I still do not understand the question well. Do you need to understand
why the node was killed? Or did something go wrong in the cluster after the
node had been killed?

Vladimir.
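As a side note, when a JVM exits with a bare "Killed" status, as in the ignite.sh output quoted below, the Linux OOM killer is a common suspect: it sends SIGKILL and leaves a record in the kernel log. A quick check, assuming Linux nodes (dmesg may require root on some distributions):

```shell
# Look for OOM-killer activity in the kernel ring buffer; a matching line
# names the victim process and its memory usage at kill time.
dmesg | grep -iE 'out of memory|oom-killer|killed process' | tail -n 20

# The same records usually also land in the syslog.
grep -iE 'oom-killer|killed process' /var/log/syslog /var/log/messages 2>/dev/null | tail -n 20
```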

On Tue, Mar 29, 2016 at 4:50 AM, liym@runstone.com <li...@runstone.com>
wrote:

> The process of one of the nodes is automatically killed when the MR task
> executes, so the other nodes cannot send messages to the killed node.
>
> [17:42:52] Security status [authentication=off, tls/ssl=off]
> [17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
> [17:42:55] Performance suggestions for grid  (fix if possible)
> [17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>
> [17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
> [17:42:55]
>
> [17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
> [17:42:55]
> [17:42:55] Ignite node started OK (id=7965370b)
>
> [17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
>
> [17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
>
> [17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
>
> [17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
>
> [17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
>
> [17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
>
> [17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> *./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"*
> *hduser@rslog1-tj:~/ignite/bin$*
>
> ------------------------------
>
>
> *From:* Vladimir Ozerov <vo...@gridgain.com>
> *Date:* 2016-03-28 18:57
> *To:* user <us...@ignite.apache.org>
> *Subject:* Re: Re: about mr accelerator question.
> Hi,
>
> I'm not sure I understand what error do you mean. At least, I do not see
> any exceptions in the log. Could you please clarify?
>
> Vladimir.
>
> On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com>
> wrote:
>
>> There is a question. Now I have 6 Ignite nodes, and there is an error when the
>> MR task is running: one node is usually killed. Can you tell me why? Thanks a
>> lot.
>> With one node or two nodes, I don't see this error.
>>
>> [17:42:52] Security status [authentication=off, tls/ssl=off]
>> [17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
>> [17:42:55] Performance suggestions for grid  (fix if possible)
>> [17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>>
>> [17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
>> [17:42:55]
>>
>> [17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
>> [17:42:55]
>> [17:42:55] Ignite node started OK (id=7965370b)
>>
>> [17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
>>
>> [17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
>>
>> [17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
>>
>> [17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
>>
>> [17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
>>
>> [17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
>>
>> [17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>>
>> ./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>> hduser@rslog1-tj:~/ignite/bin$
>>
>> *all nodes have the same config*
>> <?xml version="1.0" encoding="UTF-8"?>
>>
>> <!--
>>   Licensed to the Apache Software Foundation (ASF) under one or more
>>   contributor license agreements.  See the NOTICE file distributed with
>>   this work for additional information regarding copyright ownership.
>>   The ASF licenses this file to You under the Apache License, Version 2.0
>>   (the "License"); you may not use this file except in compliance with
>>   the License.  You may obtain a copy of the License at
>>
>>        http://www.apache.org/licenses/LICENSE-2.0
>>
>>   Unless required by applicable law or agreed to in writing, software
>>   distributed under the License is distributed on an "AS IS" BASIS,
>>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>   See the License for the specific language governing permissions and
>>   limitations under the License.
>> -->
>>
>> <!--
>>     Ignite Spring configuration file.
>>
>>
>>     When starting a standalone Ignite node, you need to execute the following command:
>>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>>
>>
>>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>>     Ignition.start("path-to-this-file/default-config.xml");
>> -->
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="
>> http://www.springframework.org/schema/util"
>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>        http://www.springframework.org/schema/beans/spring-beans.xsd
>>        http://www.springframework.org/schema/util
>>        http://www.springframework.org/schema/util/spring-util.xsd">
>>
>>     <!--
>>         Optional description.
>>     -->
>>     <description>
>>
>>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>>         Ignite node will start with this configuration by default.
>>     </description>
>>
>>     <!--
>>
>>         Initialize property configurer so we can reference environment variables.
>>     -->
>>
>>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>
>>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>>         <property name="searchSystemEnvironment" value="true"/>
>>     </bean>
>>
>>     <!--
>>         Abstract IGFS file system configuration to be used as a template.
>>     -->
>>
>>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>>         <!-- Must correlate with cache affinity mapper. -->
>>         <property name="blockSize" value="#{128 * 1024}"/>
>>         <property name="perNodeBatchSize" value="512"/>
>>         <property name="perNodeParallelBatchCount" value="16"/>
>>
>>         <property name="prefetchBlocks" value="32"/>
>>     </bean>
>>
>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>   <!-- Store cache entries on-heap. -->
>>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>>
>>   <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
>>
>>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>>   <!-- Configure eviction policy. -->
>>   <property name="evictionPolicy">
>>
>>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>>       <!-- Evict to off-heap after cache size reaches maxSize. -->
>>       <property name="maxSize" value="3400000"/>
>>
>>     </bean>
>>   </property>
>>   </bean>
>>
>>     <!--
>>
>>         Abstract cache configuration for IGFS file data to be used as a template.
>>     -->
>>
>>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>         <property name="cacheMode" value="PARTITIONED"/>
>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>         <property name="backups" value="0"/>
>>         <property name="affinityMapper">
>>
>>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>>
>>                 <!-- How many sequential blocks will be stored on the same node. -->
>>                 <constructor-arg value="512"/>
>>             </bean>
>>         </property>
>>     </bean>
>>
>>     <!--
>>
>>         Abstract cache configuration for IGFS metadata to be used as a template.
>>     -->
>>
>>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>         <property name="cacheMode" value="REPLICATED"/>
>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>     </bean>
>>
>>     <!--
>>         Configuration of Ignite node.
>>     -->
>>
>>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>>         <!--
>>             Apache Hadoop Accelerator configuration.
>>         -->
>>         <property name="hadoopConfiguration">
>>
>>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>>
>>                 <!-- Information about finished jobs will be kept for 300 seconds. -->
>>                 <property name="finishedJobInfoTtl" value="300000"/>
>>
>>             </bean>
>>         </property>
>>
>>         <!--
>>
>>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>>         -->
>>         <property name="connectorConfiguration">
>>
>>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>>                 <property name="port" value="11211"/>
>>             </bean>
>>         </property>
>>
>>         <!--
>>
>>             Configure one IGFS file system instance named "igfs" on this node.
>>         -->
>>         <property name="fileSystemConfiguration">
>>             <list>
>>
>>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>>                     <property name="name" value="igfs"/>
>>
>>                     <!-- Caches with these names must be configured. -->
>>                     <property name="metaCacheName" value="igfs-meta"/>
>>                     <property name="dataCacheName" value="igfs-data"/>
>>
>>
>>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>>                     <property name="ipcEndpointConfiguration">
>>
>>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>>                             <property name="type" value="TCP" />
>>                             <property name="host" value="0.0.0.0" />
>>                             <property name="port" value="10500" />
>>                         </bean>
>>                     </property>
>>
>>                     <!-- Sample secondary file system configuration.
>>                         'uri'      - the URI of the secondary file system.
>>
>>                         'cfgPath'  - optional configuration path of the secondary file system,
>>
>>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>>
>>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>>
>>                             if Hadoop client and the Ignite node are running on behalf of different users.
>>                     -->
>>                     <property name="secondaryFileSystem">
>>
>>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>>                             <constructor-arg name="uri" value="hdfs://
>> 202.99.96.170:9000"/>
>>
>>
>>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>>
>>                             <constructor-arg name="userName" value="client-user-name"/>
>>                         </bean>
>>                     </property>
>>                 </bean>
>>             </list>
>>         </property>
>>
>>         <!--
>>             Caches needed by IGFS.
>>         -->
>>         <property name="cacheConfiguration">
>>             <list>
>>                 <!-- File system metadata cache. -->
>>
>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>>                     <property name="name" value="igfs-meta"/>
>>                 </bean>
>>
>>                 <!-- File system files data cache. -->
>>
>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>>                     <property name="name" value="igfs-data"/>
>>                 </bean>
>>             </list>
>>         </property>
>>
>>         <!--
>>             Disable events.
>>         -->
>>         <property name="includeEventTypes">
>>             <list>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>>             </list>
>>         </property>
>>
>>         <!--
>>
>>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>>         -->
>>         <property name="discoverySpi">
>>
>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>                 <property name="ipFinder">
>>
>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>                         <property name="addresses">
>>                             <list>
>>                                 <value>202.99.96.170</value>
>>                                 <value>202.99.69.170:47500..47509</value>
>>                                 <value>202.99.96.174:47500..47509</value>
>>                                 <value>202.99.96.178:47500..47509</value>
>>                                 <value>202.99.69.174:47500..47509</value>
>>                                 <value>202.99.69.178:47500..47509</value>
>>                             </list>
>>                         </property>
>>                     </bean>
>>                 </property>
>>             </bean>
>>         </property>
>>     </bean>
>> </beans>
>>
>> *and the ignite.sh config is *
>> #!/bin/bash
>> #
>> # Licensed to the Apache Software Foundation (ASF) under one or more
>> # contributor license agreements.  See the NOTICE file distributed with
>> # this work for additional information regarding copyright ownership.
>> # The ASF licenses this file to You under the Apache License, Version 2.0
>> # (the "License"); you may not use this file except in compliance with
>> # the License.  You may obtain a copy of the License at
>> #
>> #      http://www.apache.org/licenses/LICENSE-2.0
>> #
>> # Unless required by applicable law or agreed to in writing, software
>> # distributed under the License is distributed on an "AS IS" BASIS,
>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>> # See the License for the specific language governing permissions and
>> # limitations under the License.
>> #
>>
>> #
>> # Grid command line loader.
>> #
>>
>> #
>> # Import common functions.
>> #
>> if [ "${IGNITE_HOME}" = "" ];
>>     then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
>>     else IGNITE_HOME_TMP=${IGNITE_HOME};
>> fi
>>
>> #
>> # Set SCRIPTS_HOME - base path to scripts.
>> #
>> SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"
>>
>> source "${SCRIPTS_HOME}"/include/functions.sh
>>
>> #
>> # Discover path to Java executable and check it's version.
>> #
>> checkJava
>>
>> #
>> # Discover IGNITE_HOME environment variable.
>> #
>> setIgniteHome
>>
>> if [ "${DEFAULT_CONFIG}" == "" ]; then
>>     DEFAULT_CONFIG=config/default-config.xml
>> fi
>>
>> #
>> # Parse command line parameters.
>> #
>> . "${SCRIPTS_HOME}"/include/parseargs.sh
>>
>> #
>> # Set IGNITE_LIBS.
>> #
>> . "${SCRIPTS_HOME}"/include/setenv.sh
>>
>> CP="${IGNITE_LIBS}"
>>
>>
>> RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)
>>
>> RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
>> RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"
>>
>> #
>> # Find available port for JMX
>> #
>>
>> # You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
>> #
>> # This is executed when -nojmx is not specified
>> #
>> if [ "${NOJMX}" == "0" ] ; then
>>     findAvailableJmxPort
>> fi
>>
>> # Mac OS specific support to display correct name in the dock.
>> osname=`uname`
>>
>> if [ "${DOCK_OPTS}" == "" ]; then
>>     DOCK_OPTS="-Xdock:name=Ignite Node"
>> fi
>>
>> #
>> # JVM options. See
>> http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
>>  for more details.
>> #
>> # ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
>> #
>> if [ -z "$JVM_OPTS" ] ; then
>>
>>     JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
>> fi
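One note on the JVM_OPTS above: -XX:MaxPermSize applies only to Java 7 and earlier; on Java 8+ PermGen was replaced by Metaspace and the flag is silently ignored. A sketch of the equivalent setting, assuming Java 8 nodes (the 2g figure is illustrative, not a recommendation):

```shell
# On Java 8+, cap class metadata via Metaspace instead of PermGen.
JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxMetaspaceSize=2g"
```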
>>
>> #
>>
>> # Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
>> #
>>
>> # JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
>>
>> # JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"
>>
>> #
>> # Uncomment if you get StackOverflowError.
>> # On 64 bit systems this value can be larger, e.g. -Xss16m
>> #
>> # JVM_OPTS="${JVM_OPTS} -Xss4m"
>>
>> #
>> # Uncomment to set preference for IPv4 stack.
>> #
>> # JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"
>>
>> #
>> # Assertions are disabled by default since version 3.5.
>> # If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
>> #
>> ENABLE_ASSERTIONS="0"
>>
>> #
>> # Set '-ea' options if assertions are enabled.
>> #
>> if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
>>     JVM_OPTS="${JVM_OPTS} -ea"
>> fi
>>
>> #
>> # Set main class to start service (grid node by default).
>> #
>> if [ "${MAIN_CLASS}" = "" ]; then
>>     MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
>> fi
>>
>> #
>> # Remote debugging (JPDA).
>> # Uncomment and change if remote debugging is required.
>> #
>>
>> # JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"
>>
>> ERRORCODE="-1"
>>
>> while [ "${ERRORCODE}" -ne "130" ]
>> do
>>     if [ "${INTERACTIVE}" == "1" ] ; then
>>         case $osname in
>>             Darwin*)
>>
>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>>
>>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>>             ;;
>>             *)
>>
>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>>
>>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>>             ;;
>>         esac
>>     else
>>         case $osname in
>>             Darwin*)
>>
>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>>
>>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>             ;;
>>             *)
>>
>>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>>
>>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>             ;;
>>         esac
>>     fi
>>
>>     ERRORCODE="$?"
>>
>>     if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
>>         break
>>     else
>>         rm -f "${RESTART_SUCCESS_FILE}"
>>     fi
>> done
>>
>> if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
>>     rm -f "${RESTART_SUCCESS_FILE}"
>> fi
>>
>> *and the log info; it looks normal.*
>>
>> [18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>> [18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>     ^-- Node [id=7965370b, name=null]
>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>     ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
>>     ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
>>     ^-- Public thread pool [active=9, idle=39, qSize=0]
>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>     ^-- Outbound messages queue [size=0]
>>
>> [18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>> [18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>     ^-- Node [id=7965370b, name=null]
>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>     ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
>>     ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
>>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>     ^-- Outbound messages queue [size=0]
>>
>> [18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>> [18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>     ^-- Node [id=7965370b, name=null]
>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>     ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
>>     ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
>>     ^-- Public thread pool [active=2, idle=46, qSize=0]
>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>     ^-- Outbound messages queue [size=0]
>>
>> [18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>> [18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>     ^-- Node [id=7965370b, name=null]
>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>     ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
>>     ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
>>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>     ^-- Outbound messages queue [size=0]
>>
>> [18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>> [18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
>> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>>     ^-- Node [id=7965370b, name=null]
>>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>>     ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
>>     ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
>>     ^-- Public thread pool [active=3, idle=45, qSize=0]
>>     ^-- System thread pool [active=0, idle=48, qSize=0]
>>     ^-- Outbound messages queue [size=0]
>>
>> [18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> [18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>>
>> ------------------------------
>>
>>
>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>> *Date:* 2016-03-24 21:00
>> *To:* user <us...@ignite.apache.org>
>> *Subject:* Re: Re: about mr accelerator question.
>> Hi,
>>
>> Possible speedup depends greatly on the nature of your task. Typically,
>> the more MR tasks you have and the more intensively you work with the
>> actual data, the bigger the improvement you can achieve. Please give more
>> details on what kind of jobs you run, and I may be able to suggest
>> something.
>>
>> One possible change you can make to your config is to switch the temporary
>> file system paths used by your jobs to PRIMARY mode. This way all temporary
>> data will reside only in memory and will never hit HDFS.
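>> For illustration, such PRIMARY-mode paths might be declared in the IGFS
>> FileSystemConfiguration roughly like this (a sketch only; the '/tmp/.*'
>> pattern is an assumption — substitute the temp directories your jobs
>> actually write to):

```xml
<!-- Inside the FileSystemConfiguration bean: per-path IGFS modes. -->
<property name="pathModes">
    <map>
        <!-- Assumed temp path pattern; files matching it stay purely
             in IGFS memory and are not written through to HDFS. -->
        <entry key="/tmp/.*" value="PRIMARY"/>
    </map>
</property>
```

>> With such an entry, files under the matching paths are served from IGFS
>> memory only, while all other paths keep the file system's default mode.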
>>
>> Vladimir.
>>
>> On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com>
>> wrote:
>>
>>> I am glad to tell you the problem has been solved, thanks a lot. But the
>>> performance improved by only 300%; is there any other configuration idea?
>>>
>>> Another problem is that I am not able to track jobs the way I can with the
>>> YARN framework, so I cannot count the finished jobs or view their state. Is
>>> there a good suggestion?
>>>
>>> The Ignite config is:
>>>
>>> <?xml version="1.0" encoding="UTF-8"?>
>>>
>>> <!--
>>>   Licensed to the Apache Software Foundation (ASF) under one or more
>>>   contributor license agreements.  See the NOTICE file distributed with
>>>   this work for additional information regarding copyright ownership.
>>>   The ASF licenses this file to You under the Apache License, Version 2.0
>>>   (the "License"); you may not use this file except in compliance with
>>>   the License.  You may obtain a copy of the License at
>>>
>>>        http://www.apache.org/licenses/LICENSE-2.0
>>>
>>>   Unless required by applicable law or agreed to in writing, software
>>>   distributed under the License is distributed on an "AS IS" BASIS,
>>>
>>>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>   See the License for the specific language governing permissions and
>>>   limitations under the License.
>>> -->
>>>
>>> <!--
>>>     Ignite Spring configuration file.
>>>
>>>
>>>     When starting a standalone Ignite node, you need to execute the following command:
>>>
>>>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>>>
>>>
>>>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>>>     Ignition.start("path-to-this-file/default-config.xml");
>>> -->
>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>        xmlns:util="http://www.springframework.org/schema/util"
>>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>        http://www.springframework.org/schema/beans/spring-beans.xsd
>>>        http://www.springframework.org/schema/util
>>>        http://www.springframework.org/schema/util/spring-util.xsd">
>>>
>>>     <!--
>>>         Optional description.
>>>     -->
>>>     <description>
>>>
>>>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>>>         Ignite node will start with this configuration by default.
>>>     </description>
>>>
>>>     <!--
>>>
>>>         Initialize property configurer so we can reference environment variables.
>>>     -->
>>>
>>>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>>
>>>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>>>         <property name="searchSystemEnvironment" value="true"/>
>>>     </bean>
>>>
>>>     <!--
>>>         Abstract IGFS file system configuration to be used as a template.
>>>     -->
>>>
>>>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>>>         <!-- Must correlate with cache affinity mapper. -->
>>>         <property name="blockSize" value="#{128 * 1024}"/>
>>>         <property name="perNodeBatchSize" value="512"/>
>>>         <property name="perNodeParallelBatchCount" value="16"/>
>>>
>>>         <property name="prefetchBlocks" value="32"/>
>>>     </bean>
>>>
>>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>>   <!-- Store cache entries on-heap. -->
>>>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>>>
>>>   <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
>>>
>>>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>>>   <!-- Configure eviction policy. -->
>>>   <property name="evictionPolicy">
>>>
>>>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>>>       <!-- Evict to off-heap after cache size reaches maxSize. -->
>>>       <property name="maxSize" value="800000"/>
>>>     </bean>
>>>   </property>
>>>   </bean>
>>>
>>>     <!--
>>>
>>>         Abstract cache configuration for IGFS file data to be used as a template.
>>>     -->
>>>
>>>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>         <property name="cacheMode" value="PARTITIONED"/>
>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>         <property name="backups" value="0"/>
>>>         <property name="affinityMapper">
>>>
>>>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>>>
>>>                 <!-- How many sequential blocks will be stored on the same node. -->
>>>                 <constructor-arg value="512"/>
>>>             </bean>
>>>         </property>
>>>     </bean>
>>>
>>>     <!--
>>>
>>>         Abstract cache configuration for IGFS metadata to be used as a template.
>>>     -->
>>>
>>>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>>         <property name="cacheMode" value="REPLICATED"/>
>>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>>     </bean>
>>>
>>>     <!--
>>>         Configuration of Ignite node.
>>>     -->
>>>
>>>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>>>         <!--
>>>             Apache Hadoop Accelerator configuration.
>>>         -->
>>>         <property name="hadoopConfiguration">
>>>
>>>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>>>
>>>                 <!-- Information about finished jobs will be kept for 30 seconds. -->
>>>                 <property name="finishedJobInfoTtl" value="30000"/>
>>>             </bean>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>>>         -->
>>>         <property name="connectorConfiguration">
>>>
>>>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>>>                 <property name="port" value="11211"/>
>>>             </bean>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             Configure one IGFS file system instance named "igfs" on this node.
>>>         -->
>>>         <property name="fileSystemConfiguration">
>>>             <list>
>>>
>>>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>>>                     <property name="name" value="igfs"/>
>>>
>>>                     <!-- Caches with these names must be configured. -->
>>>                     <property name="metaCacheName" value="igfs-meta"/>
>>>                     <property name="dataCacheName" value="igfs-data"/>
>>>
>>>
>>>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>>>                     <property name="ipcEndpointConfiguration">
>>>
>>>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>>>                             <property name="type" value="TCP" />
>>>                             <property name="host" value="0.0.0.0" />
>>>                             <property name="port" value="10500" />
>>>                         </bean>
>>>                     </property>
>>>
>>>                     <!-- Sample secondary file system configuration.
>>>
>>>                         'uri'      - the URI of the secondary file system.
>>>
>>>                         'cfgPath'  - optional configuration path of the secondary file system,
>>>
>>>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>>>
>>>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>>>
>>>                             if Hadoop client and the Ignite node are running on behalf of different users.
>>>                     -->
>>>                     <property name="secondaryFileSystem">
>>>
>>>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>>>
>>>                             <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
>>>
>>>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>>>
>>>                             <constructor-arg name="userName" value="client-user-name"/>
>>>                         </bean>
>>>                     </property>
>>>                 </bean>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>             Caches needed by IGFS.
>>>         -->
>>>         <property name="cacheConfiguration">
>>>             <list>
>>>                 <!-- File system metadata cache. -->
>>>
>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>>>                     <property name="name" value="igfs-meta"/>
>>>                 </bean>
>>>
>>>                 <!-- File system files data cache. -->
>>>
>>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>>>                     <property name="name" value="igfs-data"/>
>>>                 </bean>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>             Disable events.
>>>         -->
>>>         <property name="includeEventTypes">
>>>             <list>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>>>
>>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>>>             </list>
>>>         </property>
>>>
>>>         <!--
>>>
>>>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>>>         -->
>>>         <property name="discoverySpi">
>>>
>>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>                 <property name="ipFinder">
>>>
>>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>                         <property name="addresses">
>>>                             <list>
>>>                                 <value>*.*.*.*</value>
>>>                                 <value>*.*.*.*:47500..47509</value>
>>>                             </list>
>>>                         </property>
>>>                     </bean>
>>>                 </property>
>>>             </bean>
>>>         </property>
>>>     </bean>
>>> </beans>
>>>
>>> ------------------------------
>>> liym@runstone.com
>>> Beijing Runstone Fenghua Technology Co., Ltd.
>>> Li Yiming (like wind exist)
>>> Phone: 13811682465
>>>
>>>
>>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>>> *Date:* 2016-03-17 13:37
>>> *To:* user <us...@ignite.apache.org>
>>> *Subject:* Re: about mr accelerator question.
>>> Hi,
>>>
>>> The fact that you can work with a 29G cluster with only 8G of memory might
>>> be caused by the following things:
>>> 1) Your job doesn't use all the data in the cluster and hence caches only
>>> part of it. This is the most likely case.
>>> 2) You have an eviction policy configured for the IGFS data cache.
>>> 3) Or you may be using off-heap memory.
>>> Please provide the full XML configuration and we will be able to
>>> understand it.
>>>
>>> Anyway, your initial question was about out-of-memory. Could you provide
>>> the exact error message? Is it about heap memory, or maybe permgen?
>>>
>>> As for execution time, this depends on your workload. If there are lots of
>>> map tasks and very active work with data, you will see an improvement in
>>> speed. If there are lots of operations on the file system (e.g. mkdirs,
>>> move, etc.) and a very small number of map tasks, chances are there will
>>> be no speedup at all. Provide more details on the job you test and the
>>> type of data you use, and we will be able to give you more ideas on what
>>> to do.
>>>
>>> Vladimir.
>>>
>>>
>>> --
>>> View this message in context:
>>> http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>>
>>
>

Re: Re: about mr accelerator question.

Posted by "liym@runstone.com" <li...@runstone.com>.
One of the node processes is automatically killed when the MR task executes, so the other nodes cannot send messages to the killed node.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$
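
A bash-level "Killed" with no Java exception and no hs_err file usually means the process was terminated from outside the JVM, most often by the Linux OOM killer. A rough way to check is to look at the kernel log and compare the JVM's worst-case memory commitment against physical RAM. The numbers below mirror the settings quoted later in this thread; treat the total as a lower-bound estimate, since thread stacks, direct buffers and metaspace overhead add more:

```shell
# "Killed" from bash with no Java stack trace points at an external signal;
# on Linux the OOM killer logs its victims to the kernel log:
#   dmesg | grep -i -E 'killed process|out of memory'
#
# Rough worst-case memory commitment per node from the quoted settings:
HEAP_GB=32       # -Xmx32g in ignite.sh
PERMGEN_GB=16    # -XX:MaxPermSize=16g in ignite.sh
OFFHEAP_GB=14    # offHeapMaxMemory in the cache configuration

TOTAL_GB=$((HEAP_GB + PERMGEN_GB + OFFHEAP_GB))
echo "worst-case commitment: ${TOTAL_GB} GB per node"
# Compare against physical RAM on the node, e.g.: free -g
```

If the host has less physical memory than that, the kernel will start killing the largest process once the caches fill up under load, which would match a node dying only when all 6 nodes run the MR job.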




 
From: Vladimir Ozerov
Date: 2016-03-28 18:57
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

I'm not sure I understand what error do you mean. At least, I do not see any exceptions in the log. Could you please clarify?

Vladimir.

On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com> wrote:
I have a question. I now have 6 Ignite nodes, and there is an error when the MR task is running: one node usually gets killed. Can you tell me why? Thanks a lot.
With one or two nodes, I don't see this error.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$

All nodes have the same config:
<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="3400000"/>

    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 300 seconds. -->
                <property name="finishedJobInfoTtl" value="300000"/>

            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>

                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>202.99.96.170</value>
                                <value>202.99.69.170:47500..47509</value>
                                <value>202.99.96.174:47500..47509</value>
                                <value>202.99.96.178:47500..47509</value>
                                <value>202.99.69.174:47500..47509</value>
                                <value>202.99.69.178:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
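
Note that the standalone CacheConfiguration bean near the top of this file (the one with ONHEAP_TIERED, offHeapMaxMemory and the LRU eviction policy) has no id and is not referenced by the igfs-data cache, so those settings most likely never take effect. A sketch of how they would typically be folded into the data cache template instead (values copied from the bean above; verify the property names against your Ignite version):

```xml
<!-- Sketch: attaching the memory/eviction settings to the data cache
     template so they actually apply to the "igfs-data" cache. -->
<bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="memoryMode" value="ONHEAP_TIERED"/>
    <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
            <property name="maxSize" value="3400000"/>
        </bean>
    </property>
    <!-- ... atomicityMode, writeSynchronizationMode, affinityMapper
         as in the original template ... -->
</bean>
```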

And the ignite.sh config is:
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Grid command line loader.
#

#
# Import common functions.
#
if [ "${IGNITE_HOME}" = "" ];
    then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
    else IGNITE_HOME_TMP=${IGNITE_HOME};
fi

#
# Set SCRIPTS_HOME - base path to scripts.
#
SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"

source "${SCRIPTS_HOME}"/include/functions.sh

#
# Discover path to Java executable and check it's version.
#
checkJava

#
# Discover IGNITE_HOME environment variable.
#
setIgniteHome

if [ "${DEFAULT_CONFIG}" == "" ]; then
    DEFAULT_CONFIG=config/default-config.xml
fi

#
# Parse command line parameters.
#
. "${SCRIPTS_HOME}"/include/parseargs.sh

#
# Set IGNITE_LIBS.
#
. "${SCRIPTS_HOME}"/include/setenv.sh

CP="${IGNITE_LIBS}"

RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)

RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"

#
# Find available port for JMX
#
# You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
#
# This is executed when -nojmx is not specified
#
if [ "${NOJMX}" == "0" ] ; then
    findAvailableJmxPort
fi

# Mac OS specific support to display correct name in the dock.
osname=`uname`

if [ "${DOCK_OPTS}" == "" ]; then
    DOCK_OPTS="-Xdock:name=Ignite Node"
fi

#
# JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
#
# ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
#
if [ -z "$JVM_OPTS" ] ; then
    JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
fi

#
# Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
#
# JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
# JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"

#
# Uncomment if you get StackOverflowError.
# On 64 bit systems this value can be larger, e.g. -Xss16m
#
# JVM_OPTS="${JVM_OPTS} -Xss4m"

#
# Uncomment to set preference for IPv4 stack.
#
# JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"

#
# Assertions are disabled by default since version 3.5.
# If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
#
ENABLE_ASSERTIONS="0"

#
# Set '-ea' options if assertions are enabled.
#
if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
    JVM_OPTS="${JVM_OPTS} -ea"
fi

#
# Set main class to start service (grid node by default).
#
if [ "${MAIN_CLASS}" = "" ]; then
    MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
fi

#
# Remote debugging (JPDA).
# Uncomment and change if remote debugging is required.
#
# JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"

ERRORCODE="-1"

while [ "${ERRORCODE}" -ne "130" ]
do
    if [ "${INTERACTIVE}" == "1" ] ; then
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
        esac
    else
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
        esac
    fi

    ERRORCODE="$?"

    if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
        break
    else
        rm -f "${RESTART_SUCCESS_FILE}"
    fi
done

if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
    rm -f "${RESTART_SUCCESS_FILE}"
fi

And the log info; it looks normal:
[18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
    ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
    ^-- Public thread pool [active=9, idle=39, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
    ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
    ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
    ^-- Public thread pool [active=2, idle=46, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
    ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
    ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
    ^-- Public thread pool [active=3, idle=45, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]




 
From: Vladimir Ozerov
Date: 2016-03-24 21:00
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

Possible speedup greatly depends on the nature of your task. Typically, the more MR tasks you have and the more intensively you work with the actual data, the bigger the improvement that can be achieved. Please give more details on what kind of jobs you run and I will probably be able to suggest something. 

One possible change you can make to your config: switch the temporary file system paths used by your jobs to PRIMARY mode. This way all temporary data will reside only in memory and will never hit HDFS.
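For illustration, such a PRIMARY-mode mapping could be added inside the FileSystemConfiguration bean roughly as sketched below. This is only a sketch: the "/tmp/.*" pattern is an assumption and should be replaced with the temporary paths your jobs actually write to; all other paths keep the default DUAL mode against the secondary HDFS.

    <!-- Sketch: keep temporary job data purely in memory (PRIMARY mode).
         The path pattern below is an example only. -->
    <property name="pathModes">
        <map>
            <entry key="/tmp/.*" value="PRIMARY"/>
        </map>
    </property>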

Vladimir.

On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com> wrote:
I am so glad to tell you the problem has been solved, thanks a lot. But the performance improved by only 300%; is there any other good idea for the config?
There is another problem: I am not able to track jobs the way the YARN framework allows, so I can't count the jobs or view the state of the ones that have finished. Is there a good suggestion?

the ignite config is 

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="800000"/>
    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="30000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>*.*.*.*</value>
                                <value>*.*.*.*:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>



liym@runstone.com 
北京润通丰华科技有限公司 (Beijing Runstone Fenghua Technology Co., Ltd.)
李宜明 (Li Yiming) like wind exist
Phone: 13811682465
 
From: Vladimir Ozerov
Date: 2016-03-17 13:37
To: user
Subject: Re: about mr accelerator question.
Hi,
 
The fact that you can work with a 29G cluster with only 8G of memory might be
caused by the following things:
1) Your job doesn't use all the data from the cluster and hence caches only part of
it. This is the most likely case.
2) You have an eviction policy configured for the IGFS data cache. 
3) Or maybe you use off-heap memory.
Please provide the full XML configuration and we will be able to understand
it.
 
Anyway, your initial question was about out-of-memory. Could you provide the
exact error message? Is it about heap memory or maybe permgen?
 
As for execution time, this depends on your workload. If there are lots of map
tasks and very active work with data, you will see an improvement in speed. If
there are lots of operations on the file system (e.g. mkdirs, move, etc.) and very
few map jobs, chances are there will be no speedup at all. Provide
more details on the job you test and the type of data you use, and we will be
able to give you more ideas on what to do.
 
Vladimir.
 
 
--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Re: about mr accelerator question.

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Hi,

I'm not sure I understand which error you mean. At least, I do not see
any exceptions in the log. Could you please clarify?

Vladimir.

On Mon, Mar 28, 2016 at 1:30 PM, liym@runstone.com <li...@runstone.com>
wrote:

> There is a question. Now I have 6 Ignite nodes, and there is an error when the
> MR task is running: one node is usually killed. Can you tell me why? Thanks a
> lot.
> On one or two nodes, I don't see this error.
>
> [17:42:52] Security status [authentication=off, tls/ssl=off]
> [17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
> [17:42:55] Performance suggestions for grid  (fix if possible)
> [17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
>
> [17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
> [17:42:55]
>
> [17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
> [17:42:55]
> [17:42:55] Ignite node started OK (id=7965370b)
>
> [17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
>
> [17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
>
> [17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
>
> [17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
>
> [17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
>
> [17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
>
> [17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> [17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
>
> ./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
> hduser@rslog1-tj:~/ignite/bin$
>
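A bare "Killed" from the shell wrapper, as in the quoted log above, with no Java-level exception usually means the Linux kernel's OOM killer terminated the JVM process, rather than the JVM throwing OutOfMemoryError. One way to check on the affected node is to search the kernel log for the killer's verdict. The sketch below shows the search pattern applied to a sample kernel-log line (the exact dmesg wording varies by kernel version); on a real node you would pipe `dmesg` output through the same grep:

```shell
# On the affected node one would run:
#   dmesg | grep -iE 'out of memory|oom-killer|killed process'
# Here the same pattern is demonstrated against a sample kernel-log line.
sample='Out of memory: Kill process 41326 (java) score 912 or sacrifice child'
echo "$sample" | grep -qiE 'out of memory|oom-killer|killed process' \
  && echo 'OOM killer evidence found'
```

If the kernel did kill the process, the usual remedy is to size the JVM heap plus off-heap totals to fit within physical RAM, or add memory to the node.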
> *all nodes have the same config*
> <?xml version="1.0" encoding="UTF-8"?>
>
> <!--
>   Licensed to the Apache Software Foundation (ASF) under one or more
>   contributor license agreements.  See the NOTICE file distributed with
>   this work for additional information regarding copyright ownership.
>   The ASF licenses this file to You under the Apache License, Version 2.0
>   (the "License"); you may not use this file except in compliance with
>   the License.  You may obtain a copy of the License at
>
>        http://www.apache.org/licenses/LICENSE-2.0
>
>   Unless required by applicable law or agreed to in writing, software
>   distributed under the License is distributed on an "AS IS" BASIS,
>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>   See the License for the specific language governing permissions and
>   limitations under the License.
> -->
>
> <!--
>     Ignite Spring configuration file.
>
>
>     When starting a standalone Ignite node, you need to execute the following command:
>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>
>
>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>     Ignition.start("path-to-this-file/default-config.xml");
> -->
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="
> http://www.springframework.org/schema/util"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>        http://www.springframework.org/schema/beans/spring-beans.xsd
>        http://www.springframework.org/schema/util
>        http://www.springframework.org/schema/util/spring-util.xsd">
>
>     <!--
>         Optional description.
>     -->
>     <description>
>
>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>         Ignite node will start with this configuration by default.
>     </description>
>
>     <!--
>
>         Initialize property configurer so we can reference environment variables.
>     -->
>
>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>
>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>         <property name="searchSystemEnvironment" value="true"/>
>     </bean>
>
>     <!--
>         Abstract IGFS file system configuration to be used as a template.
>     -->
>
>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>         <!-- Must correlate with cache affinity mapper. -->
>         <property name="blockSize" value="#{128 * 1024}"/>
>         <property name="perNodeBatchSize" value="512"/>
>         <property name="perNodeParallelBatchCount" value="16"/>
>
>         <property name="prefetchBlocks" value="32"/>
>     </bean>
>
>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>   <!-- Store cache entries on-heap. -->
>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>
>   <!-- Enable Off-Heap memory with max size of 10 Gigabytes (0 for unlimited). -->
>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>   <!-- Configure eviction policy. -->
>   <property name="evictionPolicy">
>
>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>       <!-- Evict to off-heap after cache size reaches maxSize. -->
>       <property name="maxSize" value="3400000"/>
>
>     </bean>
>   </property>
>   </bean>
>
>     <!--
>
>         Abstract cache configuration for IGFS file data to be used as a template.
>     -->
>
>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>         <property name="cacheMode" value="PARTITIONED"/>
>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>         <property name="backups" value="0"/>
>         <property name="affinityMapper">
>
>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>
>                 <!-- How many sequential blocks will be stored on the same node. -->
>                 <constructor-arg value="512"/>
>             </bean>
>         </property>
>     </bean>
>
>     <!--
>
>         Abstract cache configuration for IGFS metadata to be used as a template.
>     -->
>
>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>         <property name="cacheMode" value="REPLICATED"/>
>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>     </bean>
>
>     <!--
>         Configuration of Ignite node.
>     -->
>
>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>         <!--
>             Apache Hadoop Accelerator configuration.
>         -->
>         <property name="hadoopConfiguration">
>
>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>
>                 <!-- Information about finished jobs will be kept for 5 minutes. -->
>                 <property name="finishedJobInfoTtl" value="300000"/>
>
>             </bean>
>         </property>
>
>         <!--
>
>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>         -->
>         <property name="connectorConfiguration">
>
>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>                 <property name="port" value="11211"/>
>             </bean>
>         </property>
>
>         <!--
>
>             Configure one IGFS file system instance named "igfs" on this node.
>         -->
>         <property name="fileSystemConfiguration">
>             <list>
>
>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>                     <property name="name" value="igfs"/>
>
>                     <!-- Caches with these names must be configured. -->
>                     <property name="metaCacheName" value="igfs-meta"/>
>                     <property name="dataCacheName" value="igfs-data"/>
>
>
>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>                     <property name="ipcEndpointConfiguration">
>
>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>                             <property name="type" value="TCP" />
>                             <property name="host" value="0.0.0.0" />
>                             <property name="port" value="10500" />
>                         </bean>
>                     </property>
>
>                     <!-- Sample secondary file system configuration.
>                         'uri'      - the URI of the secondary file system.
>
>                         'cfgPath'  - optional configuration path of the secondary file system,
>
>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>
>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>
>                             if Hadoop client and the Ignite node are running on behalf of different users.
>                     -->
>                     <property name="secondaryFileSystem">
>
>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>                             <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>
>
>
>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>
>                             <constructor-arg name="userName" value="client-user-name"/>
>                         </bean>
>                     </property>
>                 </bean>
>             </list>
>         </property>
>
>         <!--
>             Caches needed by IGFS.
>         -->
>         <property name="cacheConfiguration">
>             <list>
>                 <!-- File system metadata cache. -->
>
>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>                     <property name="name" value="igfs-meta"/>
>                 </bean>
>
>                 <!-- File system files data cache. -->
>
>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>                     <property name="name" value="igfs-data"/>
>                 </bean>
>             </list>
>         </property>
>
>         <!--
>             Disable events.
>         -->
>         <property name="includeEventTypes">
>             <list>
>
>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>
>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>
>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>             </list>
>         </property>
>
>         <!--
>
>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>         -->
>         <property name="discoverySpi">
>
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value>202.99.96.170</value>
>                                 <value>202.99.69.170:47500..47509</value>
>                                 <value>202.99.96.174:47500..47509</value>
>                                 <value>202.99.96.178:47500..47509</value>
>                                 <value>202.99.69.174:47500..47509</value>
>                                 <value>202.99.69.178:47500..47509</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </beans>
>
> *and the ignite.sh config is *
> #!/bin/bash
> #
> # Licensed to the Apache Software Foundation (ASF) under one or more
> # contributor license agreements.  See the NOTICE file distributed with
> # this work for additional information regarding copyright ownership.
> # The ASF licenses this file to You under the Apache License, Version 2.0
> # (the "License"); you may not use this file except in compliance with
> # the License.  You may obtain a copy of the License at
> #
> #      http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> #
> # Grid command line loader.
> #
>
> #
> # Import common functions.
> #
> if [ "${IGNITE_HOME}" = "" ];
>     then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
>     else IGNITE_HOME_TMP=${IGNITE_HOME};
> fi
>
> #
> # Set SCRIPTS_HOME - base path to scripts.
> #
> SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"
>
> source "${SCRIPTS_HOME}"/include/functions.sh
>
> #
> # Discover path to Java executable and check it's version.
> #
> checkJava
>
> #
> # Discover IGNITE_HOME environment variable.
> #
> setIgniteHome
>
> if [ "${DEFAULT_CONFIG}" == "" ]; then
>     DEFAULT_CONFIG=config/default-config.xml
> fi
>
> #
> # Parse command line parameters.
> #
> . "${SCRIPTS_HOME}"/include/parseargs.sh
>
> #
> # Set IGNITE_LIBS.
> #
> . "${SCRIPTS_HOME}"/include/setenv.sh
>
> CP="${IGNITE_LIBS}"
>
>
> RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)
>
> RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
> RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"
>
> #
> # Find available port for JMX
> #
>
> # You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
> #
> # This is executed when -nojmx is not specified
> #
> if [ "${NOJMX}" == "0" ] ; then
>     findAvailableJmxPort
> fi
>
> # Mac OS specific support to display correct name in the dock.
> osname=`uname`
>
> if [ "${DOCK_OPTS}" == "" ]; then
>     DOCK_OPTS="-Xdock:name=Ignite Node"
> fi
>
> #
> # JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
> #
> # ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
> #
> if [ -z "$JVM_OPTS" ] ; then
>
>     JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
> fi
>
> #
>
> # Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
> #
>
> # JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
>
> # JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"
>
> #
> # Uncomment if you get StackOverflowError.
> # On 64 bit systems this value can be larger, e.g. -Xss16m
> #
> # JVM_OPTS="${JVM_OPTS} -Xss4m"
>
> #
> # Uncomment to set preference for IPv4 stack.
> #
> # JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"
>
> #
> # Assertions are disabled by default since version 3.5.
> # If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
> #
> ENABLE_ASSERTIONS="0"
>
> #
> # Set '-ea' options if assertions are enabled.
> #
> if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
>     JVM_OPTS="${JVM_OPTS} -ea"
> fi
>
> #
> # Set main class to start service (grid node by default).
> #
> if [ "${MAIN_CLASS}" = "" ]; then
>     MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
> fi
>
> #
> # Remote debugging (JPDA).
> # Uncomment and change if remote debugging is required.
> #
>
> # JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"
>
> ERRORCODE="-1"
>
> while [ "${ERRORCODE}" -ne "130" ]
> do
>     if [ "${INTERACTIVE}" == "1" ] ; then
>         case $osname in
>             Darwin*)
>
>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>
>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>             ;;
>             *)
>
>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>                  -DIGNITE_HOME="${IGNITE_HOME}" \
>
>                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
>             ;;
>         esac
>     else
>         case $osname in
>             Darwin*)
>
>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>
>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>             ;;
>             *)
>
>                 "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
>                   -DIGNITE_HOME="${IGNITE_HOME}" \
>
>                  -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
>             ;;
>         esac
>     fi
>
>     ERRORCODE="$?"
>
>     if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
>         break
>     else
>         rm -f "${RESTART_SUCCESS_FILE}"
>     fi
> done
>
> if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
>     rm -f "${RESTART_SUCCESS_FILE}"
> fi
>
> *and the log info , it looks like normal.*
>
> [18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
> [18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=7965370b, name=null]
>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>     ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
>     ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
>     ^-- Public thread pool [active=9, idle=39, qSize=0]
>     ^-- System thread pool [active=0, idle=48, qSize=0]
>     ^-- Outbound messages queue [size=0]
>
> [18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
> [18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=7965370b, name=null]
>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>     ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
>     ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>     ^-- System thread pool [active=0, idle=48, qSize=0]
>     ^-- Outbound messages queue [size=0]
>
> [18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
> [18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=7965370b, name=null]
>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>     ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
>     ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
>     ^-- Public thread pool [active=2, idle=46, qSize=0]
>     ^-- System thread pool [active=0, idle=48, qSize=0]
>     ^-- Outbound messages queue [size=0]
>
> [18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
> [18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=7965370b, name=null]
>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>     ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
>     ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
>     ^-- Public thread pool [active=5, idle=43, qSize=0]
>     ^-- System thread pool [active=0, idle=48, qSize=0]
>     ^-- Outbound messages queue [size=0]
>
> [18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
> [18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
>     ^-- Node [id=7965370b, name=null]
>     ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
>     ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
>     ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
>     ^-- Public thread pool [active=3, idle=45, qSize=0]
>     ^-- System thread pool [active=0, idle=48, qSize=0]
>     ^-- Outbound messages queue [size=0]
>
> [18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
>
> [18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
>
> ------------------------------
>
>
> *From:* Vladimir Ozerov <vo...@gridgain.com>
> *Date:* 2016-03-24 21:00
> *To:* user <us...@ignite.apache.org>
> *Subject:* Re: Re: about mr accelerator question.
> Hi,
>
> Possible speedup greatly depends on the nature of your task. Typically,
> the more MR tasks you have and the more intensively you work with the
> actual data, the bigger the improvement that can be achieved. Please give
> more details on what kind of jobs you run and I will probably be able to
> suggest something.
>
> One possible change you can make to your config: switch the temporary file
> system paths used by your jobs to PRIMARY mode. This way all temp data
> will reside only in memory and will not hit HDFS.
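As a sketch of that PRIMARY-mode switch: `FileSystemConfiguration` exposes a `pathModes` property that maps path patterns to IGFS modes. The patterns below are assumptions and must be adjusted to the temp directories your jobs actually use:

```xml
<!-- Inside the FileSystemConfiguration bean. Paths matching these
     patterns are served from memory only and are never written to HDFS. -->
<property name="pathModes">
    <map>
        <!-- Hypothetical temp locations; adjust to your jobs. -->
        <entry key="/tmp/.*" value="PRIMARY"/>
        <entry key="/user/.*/staging/.*" value="PRIMARY"/>
    </map>
</property>
```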
>
> Vladimir.
>
> On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com>
> wrote:
>
>> I am glad to tell you that the problem has been solved, thanks a lot. But
>> performance improved by only 300%; is there any other good configuration idea?
>>
>> There is another problem: I am not able to track jobs the way I can with the
>> YARN framework, so I cannot count the jobs or view the state of the ones that
>> have finished. Is there a good suggestion?
>>
>> the ignite config is
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>>
>> <!--
>>   Licensed to the Apache Software Foundation (ASF) under one or more
>>   contributor license agreements.  See the NOTICE file distributed with
>>   this work for additional information regarding copyright ownership.
>>   The ASF licenses this file to You under the Apache License, Version 2.0
>>   (the "License"); you may not use this file except in compliance with
>>   the License.  You may obtain a copy of the License at
>>
>>        http://www.apache.org/licenses/LICENSE-2.0
>>
>>   Unless required by applicable law or agreed to in writing, software
>>   distributed under the License is distributed on an "AS IS" BASIS,
>>   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>   See the License for the specific language governing permissions and
>>   limitations under the License.
>> -->
>>
>> <!--
>>     Ignite Spring configuration file.
>>
>>
>>     When starting a standalone Ignite node, you need to execute the following command:
>>     {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml
>>
>>
>>     When starting Ignite from Java IDE, pass path to this file into Ignition:
>>     Ignition.start("path-to-this-file/default-config.xml");
>> -->
>> <beans xmlns="http://www.springframework.org/schema/beans"
>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="
>> http://www.springframework.org/schema/util"
>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>        http://www.springframework.org/schema/beans/spring-beans.xsd
>>        http://www.springframework.org/schema/util
>>        http://www.springframework.org/schema/util/spring-util.xsd">
>>
>>     <!--
>>         Optional description.
>>     -->
>>     <description>
>>
>>         Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
>>         Ignite node will start with this configuration by default.
>>     </description>
>>
>>     <!--
>>
>>         Initialize property configurer so we can reference environment variables.
>>     -->
>>
>>     <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>
>>         <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
>>         <property name="searchSystemEnvironment" value="true"/>
>>     </bean>
>>
>>     <!--
>>         Abstract IGFS file system configuration to be used as a template.
>>     -->
>>
>>     <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
>>         <!-- Must correlate with cache affinity mapper. -->
>>         <property name="blockSize" value="#{128 * 1024}"/>
>>         <property name="perNodeBatchSize" value="512"/>
>>         <property name="perNodeParallelBatchCount" value="16"/>
>>
>>         <property name="prefetchBlocks" value="32"/>
>>     </bean>
>>
>>     <bean class="org.apache.ignite.configuration.CacheConfiguration">
>>   <!-- Store cache entries on-heap. -->
>>   <property name="memoryMode" value="ONHEAP_TIERED"/>
>>
>>   <!-- Enable Off-Heap memory with max size of 14 Gigabytes (0 for unlimited). -->
>>
>>   <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
>>   <!-- Configure eviction policy. -->
>>   <property name="evictionPolicy">
>>
>>     <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
>>       <!-- Evict to off-heap after cache size reaches maxSize. -->
>>       <property name="maxSize" value="800000"/>
>>     </bean>
>>   </property>
>>   </bean>
>>
>>     <!--
>>
>>         Abstract cache configuration for IGFS file data to be used as a template.
>>     -->
>>
>>     <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>         <property name="cacheMode" value="PARTITIONED"/>
>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>         <property name="backups" value="0"/>
>>         <property name="affinityMapper">
>>
>>             <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
>>
>>                 <!-- How many sequential blocks will be stored on the same node. -->
>>                 <constructor-arg value="512"/>
>>             </bean>
>>         </property>
>>     </bean>
>>
>>     <!--
>>
>>         Abstract cache configuration for IGFS metadata to be used as a template.
>>     -->
>>
>>     <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>>         <property name="cacheMode" value="REPLICATED"/>
>>         <property name="atomicityMode" value="TRANSACTIONAL"/>
>>         <property name="writeSynchronizationMode" value="FULL_SYNC"/>
>>     </bean>
>>
>>     <!--
>>         Configuration of Ignite node.
>>     -->
>>
>>     <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
>>         <!--
>>             Apache Hadoop Accelerator configuration.
>>         -->
>>         <property name="hadoopConfiguration">
>>
>>             <bean class="org.apache.ignite.configuration.HadoopConfiguration">
>>
>>                 <!-- Information about finished jobs will be kept for 30 seconds. -->
>>                 <property name="finishedJobInfoTtl" value="30000"/>
>>             </bean>
>>         </property>
>>
>>         <!--
>>
>>             This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
>>         -->
>>         <property name="connectorConfiguration">
>>
>>             <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
>>                 <property name="port" value="11211"/>
>>             </bean>
>>         </property>
>>
>>         <!--
>>
>>             Configure one IGFS file system instance named "igfs" on this node.
>>         -->
>>         <property name="fileSystemConfiguration">
>>             <list>
>>
>>                 <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
>>                     <property name="name" value="igfs"/>
>>
>>                     <!-- Caches with these names must be configured. -->
>>                     <property name="metaCacheName" value="igfs-meta"/>
>>                     <property name="dataCacheName" value="igfs-data"/>
>>
>>
>>                     <!-- Configure TCP endpoint for communication with the file system instance. -->
>>                     <property name="ipcEndpointConfiguration">
>>
>>                         <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
>>                             <property name="type" value="TCP" />
>>                             <property name="host" value="0.0.0.0" />
>>                             <property name="port" value="10500" />
>>                         </bean>
>>                     </property>
>>
>>                     <!-- Sample secondary file system configuration.
>>                         'uri'      - the URI of the secondary file system.
>>
>>                         'cfgPath'  - optional configuration path of the secondary file system,
>>
>>                             e.g. /opt/foo/core-site.xml. Typically left to be null.
>>
>>                         'userName' - optional user name to access the secondary file system on behalf of. Use it
>>
>>                             if Hadoop client and the Ignite node are running on behalf of different users.
>>                     -->
>>                     <property name="secondaryFileSystem">
>>
>>                         <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
>>
>>                             <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
>>
>>                             <constructor-arg name="cfgPath"><null/></constructor-arg>
>>
>>                             <constructor-arg name="userName" value="client-user-name"/>
>>                         </bean>
>>                     </property>
>>                 </bean>
>>             </list>
>>         </property>
>>
>>         <!--
>>             Caches needed by IGFS.
>>         -->
>>         <property name="cacheConfiguration">
>>             <list>
>>                 <!-- File system metadata cache. -->
>>
>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
>>                     <property name="name" value="igfs-meta"/>
>>                 </bean>
>>
>>                 <!-- File system files data cache. -->
>>
>>                 <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
>>                     <property name="name" value="igfs-data"/>
>>                 </bean>
>>             </list>
>>         </property>
>>
>>         <!--
>>             Disable events.
>>         -->
>>         <property name="includeEventTypes">
>>             <list>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
>>
>>                 <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
>>             </list>
>>         </property>
>>
>>         <!--
>>
>>             TCP discovery SPI can be configured with list of addresses if multicast is not available.
>>         -->
>>         <property name="discoverySpi">
>>
>>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>                 <property name="ipFinder">
>>
>>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>                         <property name="addresses">
>>                             <list>
>>                                 <value>*.*.*.*</value>
>>                                 <value>*.*.*.*:47500..47509</value>
>>                             </list>
>>                         </property>
>>                     </bean>
>>                 </property>
>>             </bean>
>>         </property>
>>     </bean>
>> </beans>
>>
>> ------------------------------
>> liym@runstone.com
>> 北京润通丰华科技有限公司 (Beijing Runtong Fenghua Technology Co., Ltd.)
>> 李宜明 (Li Yiming), like wind exist
>> Tel: 13811682465
>>
>>
>> *From:* Vladimir Ozerov <vo...@gridgain.com>
>> *Date:* 2016-03-17 13:37
>> *To:* user <us...@ignite.apache.org>
>> *Subject:* Re: about mr accelerator question.
>> Hi,
>>
>> The fact that you can work with a 29G cluster with only 8G of memory might
>> be caused by the following things:
>> 1) Your job doesn't use all data from the cluster and hence caches only
>> part of it. This is the most likely case.
>> 2) You have an eviction policy configured for the IGFS data cache.
>> 3) Or maybe you use off-heap memory.
>> Please provide the full XML configuration and we will be able to
>> understand it.
>>
>> Anyway, your initial question was about out-of-memory. Could you provide
>> the exact error message? Is it about heap memory or maybe permgen?
>>
>> As for execution time, this depends on your workload. If there are lots
>> of map tasks and very active work with data, you will see an improvement
>> in speed. If there are lots of operations on the file system (e.g.
>> mkdirs, move, etc.) and a very small number of map jobs, chances are
>> there will be no speedup at all. Provide more details on the job you test
>> and the type of data you use and we will be able to give you more ideas
>> on what to do.
>>
>> Vladimir.
>>
>>
>> --
>> View this message in context:
>> http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>>
>

Re: Re: about mr accelerator question.

Posted by "liym@runstone.com" <li...@runstone.com>.
There is a question: I now have 6 Ignite nodes, and there is an error while the MR task is running. One node usually gets killed; can you tell me why? Thanks a lot.
With only one or two nodes, I don't see this error.

[17:42:52] Security status [authentication=off, tls/ssl=off]
[17:42:53] HADOOP_HOME is set to /home/hduser/hadoop
[17:42:55] Performance suggestions for grid  (fix if possible)
[17:42:55] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[17:42:55]   ^-- Disable grid events (remove 'includeEventTypes' from configuration)
[17:42:55] 
[17:42:55] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[17:42:55] 
[17:42:55] Ignite node started OK (id=7965370b)
[17:42:55] Topology snapshot [ver=1, servers=1, clients=0, CPUs=24, heap=32.0GB]
[17:43:12] Topology snapshot [ver=2, servers=2, clients=0, CPUs=48, heap=64.0GB]
[17:43:18] Topology snapshot [ver=3, servers=3, clients=0, CPUs=72, heap=96.0GB]
[17:43:23] Topology snapshot [ver=4, servers=4, clients=0, CPUs=96, heap=130.0GB]
[17:43:31] Topology snapshot [ver=5, servers=5, clients=0, CPUs=120, heap=160.0GB]
[17:43:38] Topology snapshot [ver=6, servers=6, clients=0, CPUs=144, heap=190.0GB]
[17:44:08] Class "o.a.i.i.processors.hadoop.counter.HadoopCountersImpl" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:09] Class "o.a.i.i.processors.hadoop.jobtracker.HadoopJobMetadata" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:13] Class "o.a.i.i.processors.hadoop.proto.HadoopProtocolTaskArguments" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:34] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleMessage" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
[17:44:36] Class "o.a.i.i.processors.hadoop.shuffle.HadoopShuffleAck" cannot be written in binary format because it either implements Externalizable interface or have writeObject/readObject methods. Please ensure that all nodes have this class in classpath. To enable binary serialization either implement Binarylizable interface or set explicit serializer using BinaryTypeConfiguration.setSerializer() method.
./ignite.sh: line 157: 41326 Killed                  "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
hduser@rslog1-tj:~/ignite/bin$

All nodes have the same config:
<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable Off-Heap memory with max size of 14 Gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="3400000"/>
    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 5 minutes (300000 ms). -->
                <property name="finishedJobInfoTtl" value="300000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://202.99.96.170:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>202.99.96.170</value>
                                <value>202.99.69.170:47500..47509</value>
                                <value>202.99.96.174:47500..47509</value>
                                <value>202.99.96.178:47500..47509</value>
                                <value>202.99.69.174:47500..47509</value>
                                <value>202.99.69.178:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

And the ignite.sh config is:
#!/bin/bash
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#
# Grid command line loader.
#

#
# Import common functions.
#
if [ "${IGNITE_HOME}" = "" ];
    then IGNITE_HOME_TMP="$(dirname "$(cd "$(dirname "$0")"; "pwd")")";
    else IGNITE_HOME_TMP=${IGNITE_HOME};
fi

#
# Set SCRIPTS_HOME - base path to scripts.
#
SCRIPTS_HOME="${IGNITE_HOME_TMP}/bin"

source "${SCRIPTS_HOME}"/include/functions.sh

#
# Discover path to Java executable and check it's version.
#
checkJava

#
# Discover IGNITE_HOME environment variable.
#
setIgniteHome

if [ "${DEFAULT_CONFIG}" == "" ]; then
    DEFAULT_CONFIG=config/default-config.xml
fi

#
# Parse command line parameters.
#
. "${SCRIPTS_HOME}"/include/parseargs.sh

#
# Set IGNITE_LIBS.
#
. "${SCRIPTS_HOME}"/include/setenv.sh

CP="${IGNITE_LIBS}"

RANDOM_NUMBER=$("$JAVA" -cp "${CP}" org.apache.ignite.startup.cmdline.CommandLineRandomNumberGenerator)

RESTART_SUCCESS_FILE="${IGNITE_HOME}/work/ignite_success_${RANDOM_NUMBER}"
RESTART_SUCCESS_OPT="-DIGNITE_SUCCESS_FILE=${RESTART_SUCCESS_FILE}"

#
# Find available port for JMX
#
# You can specify IGNITE_JMX_PORT environment variable for overriding automatically found JMX port
#
# This is executed when -nojmx is not specified
#
if [ "${NOJMX}" == "0" ] ; then
    findAvailableJmxPort
fi

# Mac OS specific support to display correct name in the dock.
osname=`uname`

if [ "${DOCK_OPTS}" == "" ]; then
    DOCK_OPTS="-Xdock:name=Ignite Node"
fi

#
# JVM options. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp for more details.
#
# ADD YOUR/CHANGE ADDITIONAL OPTIONS HERE
#
if [ -z "$JVM_OPTS" ] ; then
    JVM_OPTS="-Xms32g -Xmx32g -server -XX:+AggressiveOpts -XX:MaxPermSize=16g"
fi

#
# Uncomment the following GC settings if you see spikes in your throughput due to Garbage Collection.
#
# JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m"
# JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=0 -XX:SurvivorRatio=1024 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60"

#
# Uncomment if you get StackOverflowError.
# On 64 bit systems this value can be larger, e.g. -Xss16m
#
# JVM_OPTS="${JVM_OPTS} -Xss4m"

#
# Uncomment to set preference for IPv4 stack.
#
# JVM_OPTS="${JVM_OPTS} -Djava.net.preferIPv4Stack=true"

#
# Assertions are disabled by default since version 3.5.
# If you want to enable them - set 'ENABLE_ASSERTIONS' flag to '1'.
#
ENABLE_ASSERTIONS="0"

#
# Set '-ea' options if assertions are enabled.
#
if [ "${ENABLE_ASSERTIONS}" = "1" ]; then
    JVM_OPTS="${JVM_OPTS} -ea"
fi

#
# Set main class to start service (grid node by default).
#
if [ "${MAIN_CLASS}" = "" ]; then
    MAIN_CLASS=org.apache.ignite.startup.cmdline.CommandLineStartup
fi

#
# Remote debugging (JPDA).
# Uncomment and change if remote debugging is required.
#
# JVM_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8787 ${JVM_OPTS}"

ERRORCODE="-1"

while [ "${ERRORCODE}" -ne "130" ]
do
    if [ "${INTERACTIVE}" == "1" ] ; then
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                 -DIGNITE_HOME="${IGNITE_HOME}" \
                -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS}
            ;;
        esac
    else
        case $osname in
            Darwin*)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${DOCK_OPTS}" "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
            *)
                "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON} \
                  -DIGNITE_HOME="${IGNITE_HOME}" \
                 -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp "${CP}" ${MAIN_CLASS} "${CONFIG}"
            ;;
        esac
    fi

    ERRORCODE="$?"

    if [ ! -f "${RESTART_SUCCESS_FILE}" ] ; then
        break
    else
        rm -f "${RESTART_SUCCESS_FILE}"
    fi
done

if [ -f "${RESTART_SUCCESS_FILE}" ] ; then
    rm -f "${RESTART_SUCCESS_FILE}"
fi

And here is the log info; it looks normal.
[18:01:43,142][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:49,041][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:01:55,627][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.2%, GC=0.03%]
    ^-- Heap [used=7976MB, free=75.64%, comm=32750MB]
    ^-- Public thread pool [active=9, idle=39, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:00,667][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,336][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:02,899][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,263][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:04,293][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:06,625][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,346][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:07,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:08,437][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,579][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:09,705][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,008][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,567][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:10,724][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,196][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,669][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:11,753][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:14,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:18,090][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:23,137][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:24,544][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:33,540][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:35,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:42,653][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:53,009][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:54,465][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,141][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:55,635][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=77.57%, avg=73.51%, GC=0%]
    ^-- Heap [used=6619MB, free=79.79%, comm=32751MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:02:56,101][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:57,204][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,582][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,735][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:02:59,862][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:00,842][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:02,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,297][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:03,342][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,299][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,517][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:04,530][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:05,373][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:07,972][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:12,359][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:17,763][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:18,504][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:29,345][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:30,831][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:39,843][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:47,839][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,328][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:51,793][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:52,013][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:53,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,305][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:55,649][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.97%, avg=73.66%, GC=0.03%]
    ^-- Heap [used=11432MB, free=65.09%, comm=32752MB]
    ^-- Public thread pool [active=2, idle=46, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:03:56,413][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:56,738][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:57,375][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:58,477][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:03:59,039][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,414][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,758][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:00,929][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:01,026][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,021][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:02,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:05,905][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:09,631][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,123][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:13,795][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:20,957][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:25,043][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:32,707][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:38,130][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:42,754][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:43,353][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,462][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:45,802][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:46,897][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:47,711][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,592][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:48,780][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:49,813][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:51,222][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,722][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,756][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:52,920][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:53,432][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,444][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:54,752][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:04:55,655][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=78.93%, avg=73.91%, GC=0.03%]
    ^-- Heap [used=6666MB, free=79.64%, comm=32750MB]
    ^-- Public thread pool [active=5, idle=43, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:04:58,114][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:02,277][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:04,174][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:05,688][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:10,720][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:18,086][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:25,302][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:28,170][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:34,389][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:35,207][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,177][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,310][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:38,703][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:39,577][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,491][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-748-0-#303%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:40,597][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-757-0-#312%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:41,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-761-0-#316%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:42,926][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-747-0-#302%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-752-0-#307%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,253][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-754-0-#309%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:45,330][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-746-0-#301%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:46,194][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-745-0-#300%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,102][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-750-0-#305%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:47,845][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-749-0-#304%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:51,186][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-759-0-#314%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:54,757][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-755-0-#310%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:55,654][INFO ][grid-timeout-worker-#97%null%][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
    ^-- Node [id=7965370b, name=null]
    ^-- H/N/C [hosts=6, nodes=6, CPUs=144]
    ^-- CPU [cur=80.17%, avg=74.16%, GC=0.03%]
    ^-- Heap [used=10056MB, free=69.29%, comm=32750MB]
    ^-- Public thread pool [active=3, idle=45, qSize=0]
    ^-- System thread pool [active=0, idle=48, qSize=0]
    ^-- Outbound messages queue [size=0]
[18:05:56,225][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-740-0-#295%null%][CodecPool] Got brand-new decompressor [.gz]
[18:05:58,458][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-762-0-#317%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:01,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-742-0-#297%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:11,590][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-739-0-#294%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,248][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-741-0-#296%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:19,279][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-751-0-#306%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:27,775][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-753-0-#308%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:28,494][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-758-0-#313%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:31,404][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-760-0-#315%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:32,865][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-744-0-#299%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:33,717][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-756-0-#311%null%][CodecPool] Got brand-new decompressor [.gz]
[18:06:34,421][INFO ][Hadoop-task-7965370b-6df2-4abc-ab61-8d77a2cee18a_1-MAP-743-0-#298%null%][CodecPool] Got brand-new decompressor [.gz]




 
From: Vladimir Ozerov
Date: 2016-03-24 21:00
To: user
Subject: Re: Re: about mr accelerator question.
Hi,

Possible speedup greatly depends on the nature of your task. Typically, the more MR tasks you have and the more intensively you work with the actual data, the bigger the improvement that can be achieved. Please give more details on what kind of jobs you run, and I will probably be able to suggest something. 

One possible change you can make to your config is to switch the temporary file system paths used by your jobs to PRIMARY mode. This way, all temp data will reside only in memory and will never hit HDFS.
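For illustration, PRIMARY mode can be assigned per path via the "pathModes" property of the FileSystemConfiguration bean (a sketch only; the path patterns below are assumptions and must be adjusted to wherever your jobs actually write temporary data):

```xml
<!-- Inside the FileSystemConfiguration bean. Paths matching these patterns
     are served from memory only (PRIMARY) and never written through to the
     secondary HDFS file system. The patterns shown are examples. -->
<property name="pathModes">
    <map>
        <entry key="/tmp/.*" value="PRIMARY"/>
        <entry key="/user/.*/staging/.*" value="PRIMARY"/>
    </map>
</property>
```

All other paths keep the file system's default mode, so persistent input and output data still flows through HDFS as before.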

Vladimir.

On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com> wrote:
I am so glad to tell you the problem has been solved, thanks a lot. But the performance improved by only 300%; is there any other good idea for the config?
There is another problem: I am not able to track jobs the way I can with the YARN framework, so I can't count the jobs or view the state of those that have finished. Is there a good suggestion?

The Ignite config is:

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <!-- Store cache entries on-heap. -->
        <property name="memoryMode" value="ONHEAP_TIERED"/>
        <!-- Enable off-heap memory with a max size of 14 gigabytes (0 for unlimited). -->
        <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
        <!-- Configure eviction policy. -->
        <property name="evictionPolicy">
            <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
                <!-- Evict to off-heap after cache size reaches maxSize. -->
                <property name="maxSize" value="800000"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="30000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>*.*.*.*</value>
                                <value>*.*.*.*:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>



liym@runstone.com 
Beijing Runtong Fenghua Technology Co., Ltd.
Li Yiming (like wind exist)
Tel: 13811682465
 


Re: Re: about mr accelerator question.

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Hi,

The possible speedup greatly depends on the nature of your task. Typically,
the more MR tasks you have and the more intensively you work with the actual
data, the bigger the improvement you can achieve. Please give more details on
what kind of jobs you run and I will probably be able to suggest something.

One possible change you can make to your config: switch the temporary file
system paths used by your jobs to PRIMARY mode. This way all temp data will
reside only in memory and will never hit HDFS.
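As a sketch, such a switch could look like the following entry inside the
FileSystemConfiguration bean (the "/tmp/.*" pattern is an assumption; use
whatever staging paths your jobs actually write to):

```xml
<property name="pathModes">
    <map>
        <!-- Keep job temp/staging data purely in memory; it is never
             propagated to the secondary HDFS file system. -->
        <entry key="/tmp/.*" value="PRIMARY"/>
    </map>
</property>
```

All other paths keep the file system's default mode, so only the matched
temp paths bypass HDFS.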

Vladimir.

On Wed, Mar 23, 2016 at 8:48 AM, liym@runstone.com <li...@runstone.com>
wrote:

> I am glad to tell you the problem has been solved, thanks a lot. But the
> performance improved only 300%; is there another good idea for the config?
>
> Another problem is that I am not able to track jobs the way the YARN
> framework does, so I cannot count the jobs or view the state of the ones
> that have finished. Is there a good suggestion?

Re: Re: about mr accelerator question.

Posted by "liym@runstone.com" <li...@runstone.com>.
I am glad to tell you the problem has been solved, thanks a lot. But the performance improved only 300%; is there another good idea for the config?
Another problem is that I am not able to track jobs the way the YARN framework does, so I cannot count the jobs or view the state of the ones that have finished. Is there a good suggestion?

the ignite config is 

<?xml version="1.0" encoding="UTF-8"?>

<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<!--
    Ignite Spring configuration file.

    When starting a standalone Ignite node, you need to execute the following command:
    {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

    When starting Ignite from Java IDE, pass path to this file into Ignition:
    Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd
       http://www.springframework.org/schema/util
       http://www.springframework.org/schema/util/spring-util.xsd">

    <!--
        Optional description.
    -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!--
        Initialize property configurer so we can reference environment variables.
    -->
    <bean id="propertyConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!--
        Abstract IGFS file system configuration to be used as a template.
    -->
    <bean id="igfsCfgBase" class="org.apache.ignite.configuration.FileSystemConfiguration" abstract="true">
        <!-- Must correlate with cache affinity mapper. -->
        <property name="blockSize" value="#{128 * 1024}"/>
        <property name="perNodeBatchSize" value="512"/>
        <property name="perNodeParallelBatchCount" value="16"/>

        <property name="prefetchBlocks" value="32"/>
    </bean>

    <bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- Store cache entries on-heap. -->
  <property name="memoryMode" value="ONHEAP_TIERED"/> 
  <!-- Enable off-heap memory with max size of 14 gigabytes (0 for unlimited). -->
  <property name="offHeapMaxMemory" value="#{14 * 1024L * 1024L * 1024L}"/>
  <!-- Configure eviction policy. -->
  <property name="evictionPolicy">
    <bean class="org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy">
      <!-- Evict to off-heap after cache size reaches maxSize. -->
      <property name="maxSize" value="800000"/>
    </bean>
  </property>
  </bean>

    <!--
        Abstract cache configuration for IGFS file data to be used as a template.
    -->
    <bean id="dataCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
        <property name="backups" value="0"/>
        <property name="affinityMapper">
            <bean class="org.apache.ignite.igfs.IgfsGroupDataBlocksKeyMapper">
                <!-- How many sequential blocks will be stored on the same node. -->
                <constructor-arg value="512"/>
            </bean>
        </property>
    </bean>

    <!--
        Abstract cache configuration for IGFS metadata to be used as a template.
    -->
    <bean id="metaCacheCfgBase" class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
        <property name="cacheMode" value="REPLICATED"/>
        <property name="atomicityMode" value="TRANSACTIONAL"/>
        <property name="writeSynchronizationMode" value="FULL_SYNC"/>
    </bean>

    <!--
        Configuration of Ignite node.
    -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!--
            Apache Hadoop Accelerator configuration.
        -->
        <property name="hadoopConfiguration">
            <bean class="org.apache.ignite.configuration.HadoopConfiguration">
                <!-- Information about finished jobs will be kept for 30 seconds. -->
                <property name="finishedJobInfoTtl" value="30000"/>
            </bean>
        </property>

        <!--
            This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker.
        -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!--
            Configure one IGFS file system instance named "igfs" on this node.
        -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration" parent="igfsCfgBase">
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP" />
                            <property name="host" value="0.0.0.0" />
                            <property name="port" value="10500" />
                        </bean>
                    </property>

                    <!-- Sample secondary file system configuration.
                        'uri'      - the URI of the secondary file system.
                        'cfgPath'  - optional configuration path of the secondary file system,
                            e.g. /opt/foo/core-site.xml. Typically left to be null.
                        'userName' - optional user name to access the secondary file system on behalf of. Use it
                            if Hadoop client and the Ignite node are running on behalf of different users.
                    -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <constructor-arg name="uri" value="hdfs://*.*.*.*:9000"/>
                            <constructor-arg name="cfgPath"><null/></constructor-arg>
                            <constructor-arg name="userName" value="client-user-name"/>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!--
            Caches needed by IGFS.
        -->
        <property name="cacheConfiguration">
            <list>
                <!-- File system metadata cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="metaCacheCfgBase">
                    <property name="name" value="igfs-meta"/>
                </bean>

                <!-- File system files data cache. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration" parent="dataCacheCfgBase">
                    <property name="name" value="igfs-data"/>
                </bean>
            </list>
        </property>

        <!--
            Disable events.
        -->
        <property name="includeEventTypes">
            <list>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_JOB_MAPPED"/>
            </list>
        </property>

        <!--
            TCP discovery SPI can be configured with list of addresses if multicast is not available.
        -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>*.*.*.*</value>
                                <value>*.*.*.*:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
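
For completeness, the Hadoop client side has to be pointed at the connector
port configured above (11211). A minimal client-side mapred-site.xml sketch,
assuming the Ignite node runs on localhost, might look roughly like:

```xml
<configuration>
    <!-- Run MR jobs through the Ignite Hadoop Accelerator instead of YARN. -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>ignite</value>
    </property>
    <!-- Address of an Ignite node's connector; the port must match the
         ConnectorConfiguration port in the node config above. -->
    <property>
        <name>mapreduce.jobtracker.address</name>
        <value>localhost:11211</value>
    </property>
</configuration>
```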



liym@runstone.com 
Beijing Runtong Fenghua Technology Co., Ltd.
Li Yiming (李宜明) like wind exist
Tel: 13811682465
 

Re: about mr accelerator question.

Posted by Vladimir Ozerov <vo...@gridgain.com>.
Hi,

The fact that you can work with a 29G cluster with only 8G of memory might be
caused by the following things:
1) Your job doesn't use all the data from the cluster and hence caches only
part of it. This is the most likely case.
2) You have an eviction policy configured for the IGFS data cache.
3) Or maybe you use offheap.
Please provide the full XML configuration and we will be able to understand
it.

Anyway, your initial question was about out-of-memory. Could you provide the
exact error message? Is it about heap memory or maybe permgen?
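
If it turns out to be permgen rather than heap, note that the Hadoop
Accelerator needs more permgen/metaspace than a typical application.
ignite.sh honors the JVM_OPTS environment variable, so a sketch of raising
the limits (the sizes here are illustrative, not recommendations) is:

```shell
# JDK 7 and earlier: permgen is a separate region; raise it explicitly.
export JVM_OPTS="-Xms8g -Xmx8g -XX:MaxPermSize=512m"

# JDK 8+: permgen is gone; metaspace grows on demand but can be capped.
# export JVM_OPTS="-Xms8g -Xmx8g -XX:MaxMetaspaceSize=512m"

echo "$JVM_OPTS"
```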

As for execution time, this depends on your workload. If there are lots of
map tasks and very active work with the data, you will see an improvement in
speed. If there are lots of operations on the file system (e.g. mkdirs, move,
etc.) and very few map jobs, chances are there will be no speedup at all.
Provide more details on the job you test and the type of data you use and we
will be able to give you more ideas on what to do.

Vladimir.



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-mr-accelerator-question-tp3502p3552.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.