Posted to user@ignite.apache.org by Andrey Mashenkov <an...@gmail.com> on 2017/04/05 15:21:10 UTC

Re: Error in executing hadoop job using ignite

Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send an empty
email to user-subscribe@ignite.apache.org and follow the simple instructions
in the reply.


It looks like you have an outdated "objectweb-asm" jar library on the classpath.
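
To double-check which ASM jar is actually being picked up, you can run a tiny
diagnostic on the same classpath as your Hadoop/Ignite node. This is just a
sketch (the class name AsmCheck is only for illustration); it prints the jar
that provides org.objectweb.asm.ClassReader and whether the ASM5 API level is
available:

public class AsmCheck {
    public static void main(String[] args) throws Exception {
        // Which jar does ClassReader come from?
        Class<?> reader = Class.forName("org.objectweb.asm.ClassReader");
        java.security.CodeSource src = reader.getProtectionDomain().getCodeSource();
        System.out.println("ClassReader loaded from: "
            + (src == null ? "<unknown>" : src.getLocation()));

        // Newer ASM versions define the Opcodes.ASM5 constant; very old ones do not.
        Class<?> opcodes = Class.forName("org.objectweb.asm.Opcodes");
        try {
            System.out.println("ASM5 api constant: " + opcodes.getField("ASM5").getInt(null));
        } catch (NoSuchFieldException e) {
            System.out.println("Opcodes.ASM5 not found - the asm jar on the classpath looks too old.");
        }
    }
}

The IllegalArgumentException thrown from ClassReader's constructor usually
means the ASM library is too old to parse the class files it is given. If the
diagnostic points at an old asm jar somewhere in your Hadoop distribution,
remove it from the classpath (or make sure the ASM jar shipped with Ignite,
typically under $IGNITE_HOME/libs, comes first) and retry the job.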


You wrote:

Hi,
I am using the Ignite Hadoop Accelerator with HDFS as the secondary file system. But
when I submit a job using the Ignite configuration, it shows the following error.
Please tell me if you see anything wrong.
]$ hadoop --config ~/ignite_conf jar /app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /PrashantSingh/1184-0.txt /output4tyy
Apr 04, 2017 3:33:30 PM org.apache.ignite.internal.client.impl.connection.GridClientNioTcpConnection <init>
INFO: Client TCP connection established: hmaster/10.202.17.60:11211
Apr 04, 2017 3:33:30 PM org.apache.ignite.internal.client.impl.GridClientImpl <init>
INFO: Client started [id=1ce64156-1137-4a77-bed3-d32b962ce3c4, protocol=TCP]
2017-04-04 15:33:31,688 INFO [main] input.FileInputFormat (FileInputFormat.java:listStatus(283)) - Total input paths to process : 1
2017-04-04 15:33:32,202 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(198)) - number of splits:1
2017-04-04 15:33:33,222 INFO [main] mapreduce.JobSubmitter (JobSubmitter.java:printTokens(287)) - Submitting tokens for job: job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
2017-04-04 15:33:33,445 INFO [main] mapreduce.Job (Job.java:submit(1294)) - The url to track the job: N/A
2017-04-04 15:33:33,447 INFO [main] mapreduce.Job (Job.java:monitorAndPrintJob(1339)) - Running job: job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
java.io.IOException: Job tracker doesn't have any information about the job: job_6f75490d-9038-43af-93ba-3d06081f65d2_0002
    at org.apache.ignite.internal.processors.hadoop.impl.proto.HadoopClientProtocol.getJobStatus(HadoopClientProtocol.java:192)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:323)
    at org.apache.hadoop.mapreduce.Job$1.run(Job.java:320)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.updateStatus(Job.java:320)
    at org.apache.hadoop.mapreduce.Job.isComplete(Job.java:604)
    at org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1349)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1311)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)


and at the Ignite node it shows the following error:
[15:33:33,408][ERROR][pub-#117%null%][HadoopJobTracker] Failed to submit job: 6f75490d-9038-43af-93ba-3d06081f65d2_2
class org.apache.ignite.IgniteCheckedException: class org.apache.ignite.IgniteException: null
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2JobResourceManager.prepareJobEnvironment(HadoopV2JobResourceManager.java:169)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.initialize(HadoopV2Job.java:319)
    at org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.job(HadoopJobTracker.java:1123)
    at org.apache.ignite.internal.processors.hadoop.jobtracker.HadoopJobTracker.submit(HadoopJobTracker.java:313)
    at org.apache.ignite.internal.processors.hadoop.HadoopProcessor.submit(HadoopProcessor.java:173)
    at org.apache.ignite.internal.processors.hadoop.HadoopImpl.submit(HadoopImpl.java:69)
    at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:50)
    at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolSubmitJobTask.run(HadoopProtocolSubmitJobTask.java:33)
    at org.apache.ignite.internal.processors.hadoop.proto.HadoopProtocolTaskAdapter$Job.execute(HadoopProtocolTaskAdapter.java:101)
    at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:560)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6618)
    at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:554)
    at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:483)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: class org.apache.ignite.IgniteException: null
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemCacheUtils.fileSystemForMrUserWithCaching(HadoopFileSystemCacheUtils.java:118)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2Job.fileSystem(HadoopV2Job.java:463)
    at org.apache.ignite.internal.processors.hadoop.impl.v2.HadoopV2JobResourceManager.prepareJobEnvironment(HadoopV2JobResourceManager.java:134)
    ... 16 more
Caused by: class org.apache.ignite.IgniteException: null
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:100)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemCacheUtils.fileSystemForMrUserWithCaching(HadoopFileSystemCacheUtils.java:115)
    ... 18 more
Caused by: class org.apache.ignite.IgniteCheckedException: null
    at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7239)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:170)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.getValue(HadoopLazyConcurrentMap.java:191)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:93)
    ... 19 more
Caused by: java.lang.IllegalArgumentException
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)
    at org.objectweb.asm.ClassReader.<init>(Unknown Source)
    at org.apache.ignite.internal.processors.hadoop.HadoopHelperImpl.loadReplace(HadoopHelperImpl.java:93)
    at org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.loadReplace(HadoopClassLoader.java:331)
    at org.apache.ignite.internal.processors.hadoop.HadoopClassLoader.loadClass(HadoopClassLoader.java:290)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at org.apache.hadoop.tracing.SpanReceiverHost.get(SpanReceiverHost.java:79)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:162)
    at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:159)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:159)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemCacheUtils$1.createValue(HadoopFileSystemCacheUtils.java:59)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopFileSystemCacheUtils$1.createValue(HadoopFileSystemCacheUtils.java:42)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.init(HadoopLazyConcurrentMap.java:173)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap$ValueWrapper.access$100(HadoopLazyConcurrentMap.java:154)
    at org.apache.ignite.internal.processors.hadoop.impl.fs.HadoopLazyConcurrentMap.getOrCreate(HadoopLazyConcurrentMap.java:82)
    ... 19 more



This is my default-config.xml file
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!--
  Ignite Spring configuration file.

  When starting a standalone Ignite node, you need to execute the following command:
  {IGNITE_HOME}/bin/ignite.{bat|sh} path-to-this-file/default-config.xml

  When starting Ignite from Java IDE, pass path to this file into Ignition:
  Ignition.start("path-to-this-file/default-config.xml");
-->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">
    <!-- Optional description. -->
    <description>
        Spring file for Ignite node configuration with IGFS and Apache Hadoop
        map-reduce support enabled.
        Ignite node will start with this configuration by default.
    </description>

    <!-- Initialize property configurer so we can reference environment variables. -->
    <bean id="propertyConfigurer"
          class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_FALLBACK"/>
        <property name="searchSystemEnvironment" value="true"/>
    </bean>

    <!-- Configuration of Ignite node. -->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Configure caches where IGFS will store data. -->
        <property name="cacheConfiguration">
            <list>
                <!--
                    Configure metadata cache where file system structure will be stored.
                    It must be TRANSACTIONAL, and must have backups to maintain file system
                    consistency in case of node crash.
                -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration">
                    <property name="name" value="igfs-meta"/>
                    <property name="cacheMode" value="REPLICATED"/>
                    <property name="atomicityMode" value="TRANSACTIONAL"/>
                </bean>

                <!-- Configure data cache where file's data will be stored. -->
                <bean class="org.apache.ignite.configuration.CacheConfiguration">
                    <property name="name" value="igfs-data"/>
                    <property name="atomicityMode" value="TRANSACTIONAL"/>
                </bean>
            </list>
        </property>

        <!-- This port will be used by Apache Hadoop client to connect to Ignite node as if it was a job tracker. -->
        <property name="connectorConfiguration">
            <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
                <property name="port" value="11211"/>
            </bean>
        </property>

        <!-- Configure one IGFS file system instance named "igfs" on this node. -->
        <property name="fileSystemConfiguration">
            <list>
                <bean class="org.apache.ignite.configuration.FileSystemConfiguration">
                    <!-- IGFS name you will use to access IGFS through Hadoop API. -->
                    <property name="name" value="igfs"/>

                    <!-- Caches with these names must be configured. -->
                    <property name="metaCacheName" value="igfs-meta"/>
                    <property name="dataCacheName" value="igfs-data"/>

                    <!-- Configure TCP endpoint for communication with the file system instance. -->
                    <property name="ipcEndpointConfiguration">
                        <bean class="org.apache.ignite.igfs.IgfsIpcEndpointConfiguration">
                            <property name="type" value="TCP"/>
                            <property name="host" value="0.0.0.0"/>
                            <property name="port" value="10500"/>
                        </bean>
                    </property>

                    <!-- Configure secondary file system if needed. -->
                    <property name="secondaryFileSystem">
                        <bean class="org.apache.ignite.hadoop.fs.IgniteHadoopIgfsSecondaryFileSystem">
                            <property name="fileSystemFactory">
                                <bean class="org.apache.ignite.hadoop.fs.CachingHadoopFileSystemFactory">
                                    <property name="uri" value="hdfs://hmaster:9000/"/>
                                </bean>
                            </property>
                        </bean>
                    </property>
                </bean>
            </list>
        </property>

        <!-- TCP discovery SPI can be configured with list of addresses if multicast is not available. -->
        <!--
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
        -->
    </bean>
</beans>

Configuration files inside the ignite_conf directory are attached: core-site.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/n11704/core-site.xml>
hdfs-site.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/n11704/hdfs-site.xml>
hive-site.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/n11704/hive-site.xml>
mapred-site.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/n11704/mapred-site.xml>



-- 
Best regards,
Andrey V. Mashenkov