Posted to user@giraph.apache.org by Gil Tselenchuk <gi...@gmail.com> on 2012/12/22 14:09:42 UTC

giraph - problem running shortest path example

Hello friends,

I have a problem running the Giraph "shortest paths" example on Hadoop.
My setup:
1. Debian OS.
2. Hadoop 1.0.3, single node (it runs the "word count" example
successfully).
3. Maven 3.0.4.
4. The input files from the Apache Giraph site (sample shown below).
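
For reference, the input is the small sample graph from the Giraph site.
With JsonLongDoubleFloatDoubleVertexInputFormat each line is
[source_id, source_value, [[dest_id, edge_value], ...]], so the file
looks like this:

[0,0,[[1,1],[3,3]]]
[1,0,[[0,1],[2,2],[3,1]]]
[2,0,[[1,2],[4,4]]]
[3,0,[[0,3],[1,1],[4,4]]]
[4,0,[[3,4],[2,4]]]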

When I run the example there is no output, only the logs that follow.
What can I do?
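
Listing the output directory (standard Hadoop shell) shows only the
_logs subdirectory, no part files:

hadoop fs -ls /user/hduser/shortestPathsOutputGraph20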

Thanks, Gil


Terminal output:

hduser@beb-1:/usr/local/giraph-trunk$ hadoop jar
/home/hduser/Desktop/giraph.jar org.apache.giraph.GiraphRunner
org.apache.giraph.examples.SimpleShortestPathsVertex -if
org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat -ip
/user/hduser/shortestPathsInputGraph/ -of
org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat -op
shortestPathsOutputGraph20 -w 3

12/12/05 16:27:33 INFO graph.GiraphJob: run: Since checkpointing is
disabled (default), do not allow any task retries (setting
mapred.map.max.attempts = 0, old value = 4)
12/12/05 16:27:34 INFO mapred.JobClient: Running job: job_201212051558_0002
12/12/05 16:27:35 INFO mapred.JobClient:  map 0% reduce 0%
12/12/05 16:27:52 INFO mapred.JobClient:  map 25% reduce 0%
12/12/05 16:27:55 INFO mapred.JobClient:  map 50% reduce 0%
12/12/05 16:28:01 INFO mapred.JobClient:  map 75% reduce 0%
12/12/05 16:38:36 INFO mapred.JobClient:  map 50% reduce 0%
12/12/05 16:38:41 INFO mapred.JobClient: Job complete: job_201212051558_0002
12/12/05 16:38:41 INFO mapred.JobClient: Counters: 6
12/12/05 16:38:41 INFO mapred.JobClient:   Job Counters
12/12/05 16:38:41 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=1947284
12/12/05 16:38:41 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0
12/12/05 16:38:41 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0
12/12/05 16:38:41 INFO mapred.JobClient:     Launched map tasks=4
12/12/05 16:38:41 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
12/12/05 16:38:41 INFO mapred.JobClient:     Failed map tasks=1




-------------------------------------
attempt_201212051558_0002_m_000000_0
task_201212051558_0002_m_000000<http://localhost:50030/taskdetails.jsp?tipid=task_201212051558_0002_m_000000>
beb-1.bgu.ac.il <http://beb-1.bgu.ac.il:50060/> FAILED

java.lang.Throwable: Child Error
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
	at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


attempt_201212051558_0002_m_000001_0
task_201212051558_0002_m_000001<http://localhost:50030/taskdetails.jsp?tipid=task_201212051558_0002_m_000001>
beb-1.bgu.ac.il <http://beb-1.bgu.ac.il:50060/> FAILED

java.lang.IllegalStateException: run: Caught an unrecoverable
exception exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
	at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:768)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.IllegalStateException: exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
	at org.apache.giraph.zk.ZooKeeperExt.exists(ZooKeeperExt.java:369)
	at org.apache.giraph.graph.BspServiceWorker.startSuperstep(BspServiceWorker.java:653)
	at org.apache.giraph.graph.BspServiceWorker.setup(BspServiceWorker.java:452)
	at org.apache.giraph.graph.GraphMapper.map(GraphMapper.java:540)
	at org.apache.giraph.graph.GraphMapper.run(GraphMapper.java:739)
	... 7 more

-------
Task attempt_201212051558_0002_m_000001_0 failed to report status for
602 seconds. Killing!



shortestPathsOutputGraph20<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20&namenodeInfoPort=50070>
/_logs<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs&namenodeInfoPort=50070>
/history<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs/history&namenodeInfoPort=50070>
/job_201212051558_0002_1354717653975_hduser_Giraph%3A+org.apache.giraph.examples.SimpleShortestP

Meta VERSION="1" .
Job JOBID="job_201212051558_0002" JOBNAME="Giraph:
org\.apache\.giraph\.examples\.SimpleShortestPathsVertex" USER="hduser"
SUBMIT_TIME="1354717653975"
JOBCONF="hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/\.staging/job_201212051558_0002/job\.xml"
VIEW_JOB="*" MODIFY_JOB="*" JOB_QUEUE="default" .
Job JOBID="job_201212051558_0002" JOB_PRIORITY="NORMAL" .
Job JOBID="job_201212051558_0002" LAUNCH_TIME="1354717654079"
TOTAL_MAPS="4" TOTAL_REDUCES="0" JOB_STATUS="PREP" .
Task TASKID="task_201212051558_0002_m_000005" TASK_TYPE="SETUP"
START_TIME="1354717655256" SPLITS="" .
MapAttempt TASK_TYPE="SETUP" TASKID="task_201212051558_0002_m_000005"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000005_0"
START_TIME="1354717655984"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="SETUP" TASKID="task_201212051558_0002_m_000005"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000005_0"
TASK_STATUS="SUCCESS" FINISH_TIME="1354717659857"
HOSTNAME="/default-rack/beb-1\.bgu\.ac\.il" STATE_STRING="setup"
COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
snapshot)(67006464)][(SPILLED_RECORDS)(Spilled
Records)(0)][(CPU_MILLISECONDS)(CPU time spent
\\(ms\\))(80)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
\\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
snapshot)(493641728)]}" .
Task TASKID="task_201212051558_0002_m_000005" TASK_TYPE="SETUP"
TASK_STATUS="SUCCESS" FINISH_TIME="1354717661263"
COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
snapshot)(67006464)][(SPILLED_RECORDS)(Spilled
Records)(0)][(CPU_MILLISECONDS)(CPU time spent
\\(ms\\))(80)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
\\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
snapshot)(493641728)]}" .
Job JOBID="job_201212051558_0002" JOB_STATUS="RUNNING" .
Task TASKID="task_201212051558_0002_m_000000" TASK_TYPE="MAP"
START_TIME="1354717661265" SPLITS="" .
Task TASKID="task_201212051558_0002_m_000001" TASK_TYPE="MAP"
START_TIME="1354717664272" SPLITS="" .
Task TASKID="task_201212051558_0002_m_000002" TASK_TYPE="MAP"
START_TIME="1354717667275" SPLITS="" .
Task TASKID="task_201212051558_0002_m_000003" TASK_TYPE="MAP"
START_TIME="1354717670282" SPLITS="" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000000"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000000_0"
START_TIME="1354717661271"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000000"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000000_0" TASK_STATUS="FAILED"
FINISH_TIME="1354717675621" HOSTNAME="beb-1\.bgu\.ac\.il"
ERROR="java\.lang\.Throwable: Child Error
 at org\.apache\.hadoop\.mapred\.TaskRunner\.run(TaskRunner\.java:271)
Caused by: java\.io\.IOException: Task process exit with nonzero status of
1\.
 at org\.apache\.hadoop\.mapred\.TaskRunner\.run(TaskRunner\.java:258)
" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000001"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000001_0"
START_TIME="1354717664274"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000001"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000001_0" TASK_STATUS="FAILED"
FINISH_TIME="1354718312785" HOSTNAME="beb-1\.bgu\.ac\.il"
ERROR="java\.lang\.IllegalStateException: run: Caught an unrecoverable
exception exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:768)
at org\.apache\.hadoop\.mapred\.MapTask\.runNewMapper(MapTask\.java:764)
 at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:370)
at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)
 at java\.security\.AccessController\.doPrivileged(Native Method)
at javax\.security\.auth\.Subject\.doAs(Subject\.java:396)
 at
org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)
at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)
Caused by: java\.lang\.IllegalStateException: exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.zk\.ZooKeeperExt\.exists(ZooKeeperExt\.java:369)
at
org\.apache\.giraph\.graph\.BspServiceWorker\.startSuperstep(BspServiceWorker\.java:653)
 at
org\.apache\.giraph\.graph\.BspServiceWorker\.setup(BspServiceWorker\.java:452)
at org\.apache\.giraph\.graph\.GraphMapper\.map(GraphMapper\.java:540)
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:739)
\.\.\. 7 more
,Task attempt_201212051558_0002_m_000001_0 failed to report status for 602
seconds\. Killing!" .
Task TASKID="task_201212051558_0002_m_000001" TASK_TYPE="MAP"
TASK_STATUS="FAILED" FINISH_TIME="1354718312785"
ERROR="java\.lang\.IllegalStateException: run: Caught an unrecoverable
exception exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:768)
at org\.apache\.hadoop\.mapred\.MapTask\.runNewMapper(MapTask\.java:764)
 at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:370)
at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)
 at java\.security\.AccessController\.doPrivileged(Native Method)
at javax\.security\.auth\.Subject\.doAs(Subject\.java:396)
 at
org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)
at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)
Caused by: java\.lang\.IllegalStateException: exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.zk\.ZooKeeperExt\.exists(ZooKeeperExt\.java:369)
at
org\.apache\.giraph\.graph\.BspServiceWorker\.startSuperstep(BspServiceWorker\.java:653)
 at
org\.apache\.giraph\.graph\.BspServiceWorker\.setup(BspServiceWorker\.java:452)
at org\.apache\.giraph\.graph\.GraphMapper\.map(GraphMapper\.java:540)
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:739)
\.\.\. 7 more
,Task attempt_201212051558_0002_m_000001_0 failed to report status for 602
seconds\. Killing!" TASK_ATTEMPT_ID="" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000003"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000003_0"
START_TIME="1354717670284"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000003"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000003_0" TASK_STATUS="FAILED"
FINISH_TIME="1354718312799" HOSTNAME="beb-1\.bgu\.ac\.il"
ERROR="java\.lang\.IllegalStateException: run: Caught an unrecoverable
exception exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:768)
at org\.apache\.hadoop\.mapred\.MapTask\.runNewMapper(MapTask\.java:764)
 at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:370)
at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)
 at java\.security\.AccessController\.doPrivileged(Native Method)
at javax\.security\.auth\.Subject\.doAs(Subject\.java:396)
 at
org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)
at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)
Caused by: java\.lang\.IllegalStateException: exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.zk\.ZooKeeperExt\.exists(ZooKeeperExt\.java:369)
at
org\.apache\.giraph\.graph\.BspServiceWorker\.startSuperstep(BspServiceWorker\.java:653)
 at
org\.apache\.giraph\.graph\.BspServiceWorker\.setup(BspServiceWorker\.java:452)
at org\.apache\.giraph\.graph\.GraphMapper\.map(GraphMapper\.java:540)
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:739)
\.\.\. 7 more
,Task attempt_201212051558_0002_m_000003_0 failed to report status for 602
seconds\. Killing!" .
Task TASKID="task_201212051558_0002_m_000004" TASK_TYPE="CLEANUP"
START_TIME="1354718315782" SPLITS="" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000002"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000002_0"
START_TIME="1354717667278"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="MAP" TASKID="task_201212051558_0002_m_000002"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000002_0" TASK_STATUS="FAILED"
FINISH_TIME="1354718315788" HOSTNAME="beb-1\.bgu\.ac\.il"
ERROR="java\.lang\.IllegalStateException: run: Caught an unrecoverable
exception exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:768)
at org\.apache\.hadoop\.mapred\.MapTask\.runNewMapper(MapTask\.java:764)
 at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:370)
at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)
 at java\.security\.AccessController\.doPrivileged(Native Method)
at javax\.security\.auth\.Subject\.doAs(Subject\.java:396)
 at
org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)
at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)
Caused by: java\.lang\.IllegalStateException: exists: Failed to check
/_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
after 3 tries!
 at org\.apache\.giraph\.zk\.ZooKeeperExt\.exists(ZooKeeperExt\.java:369)
at
org\.apache\.giraph\.graph\.BspServiceWorker\.startSuperstep(BspServiceWorker\.java:653)
 at
org\.apache\.giraph\.graph\.BspServiceWorker\.setup(BspServiceWorker\.java:452)
at org\.apache\.giraph\.graph\.GraphMapper\.map(GraphMapper\.java:540)
 at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:739)
\.\.\. 7 more
,Task attempt_201212051558_0002_m_000002_0 failed to report status for 602
seconds\. Killing!" .
MapAttempt TASK_TYPE="CLEANUP" TASKID="task_201212051558_0002_m_000004"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000004_0"
START_TIME="1354718315790"
TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
HTTP_PORT="50060" .
MapAttempt TASK_TYPE="CLEANUP" TASKID="task_201212051558_0002_m_000004"
TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000004_0"
TASK_STATUS="SUCCESS" FINISH_TIME="1354718319665"
HOSTNAME="/default-rack/beb-1\.bgu\.ac\.il" STATE_STRING="cleanup"
COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
snapshot)(65875968)][(SPILLED_RECORDS)(Spilled
Records)(0)][(CPU_MILLISECONDS)(CPU time spent
\\(ms\\))(70)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
\\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
snapshot)(565895168)]}" .
Task TASKID="task_201212051558_0002_m_000004" TASK_TYPE="CLEANUP"
TASK_STATUS="SUCCESS" FINISH_TIME="1354718321788"
COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
snapshot)(65875968)][(SPILLED_RECORDS)(Spilled
Records)(0)][(CPU_MILLISECONDS)(CPU time spent
\\(ms\\))(70)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
\\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
snapshot)(565895168)]}" .
Job JOBID="job_201212051558_0002" FINISH_TIME="1354718321789"
JOB_STATUS="FAILED" FINISHED_MAPS="0" FINISHED_REDUCES="0" FAIL_REASON="#
of failed Map Tasks exceeded allowed limit\. FailedCount: 1\.
LastFailedTask: task_201212051558_0002_m_000001" .

shortestPathsOutputGraph20<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20&namenodeInfoPort=50070>
/_logs<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs&namenodeInfoPort=50070>
/history<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs/history&namenodeInfoPort=50070>
/job_201212051558_0002_conf.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
<property><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
<property><name>mapred.task.cache.levels</name><value>2</value></property>
<property><name>giraph.vertexOutputFormatClass</name><value>org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat</value></property>
<property><name>hadoop.tmp.dir</name><value>/app/hadoop/tmp</value></property>
<property><name>hadoop.native.lib</name><value>true</value></property>
<property><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value></property>
<property><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value></property>
<property><name>dfs.https.need.client.auth</name><value>false</value></property>
<property><name>ipc.client.idlethreshold</name><value>4000</value></property>
<property><name>dfs.datanode.data.dir.perm</name><value>755</value></property>
<property><name>mapred.system.dir</name><value>${hadoop.tmp.dir}/mapred/system</value></property>
<property><name>mapred.job.tracker.persist.jobstatus.hours</name><value>0</value></property>
<property><name>dfs.datanode.address</name><value>0.0.0.0:50010</value></property>
<property><name>dfs.namenode.logging.level</name><value>info</value></property>
<property><name>dfs.block.access.token.enable</name><value>false</value></property>
<property><name>io.skip.checksum.errors</name><value>false</value></property>
<property><name>fs.default.name</name><value>hdfs://localhost:54310</value></property>
<property><name>mapred.cluster.reduce.memory.mb</name><value>-1</value></property>
<property><name>mapred.child.tmp</name><value>./tmp</value></property>
<property><name>fs.har.impl.disable.cache</name><value>true</value></property>
<property><name>dfs.safemode.threshold.pct</name><value>0.999f</value></property>
<property><name>mapred.skip.reduce.max.skip.groups</name><value>0</value></property>
<property><name>dfs.namenode.handler.count</name><value>10</value></property>
<property><name>dfs.blockreport.initialDelay</name><value>0</value></property>
<property><name>mapred.heartbeats.in.second</name><value>100</value></property>
<property><name>mapred.tasktracker.dns.nameserver</name><value>default</value></property>
<property><name>io.sort.factor</name><value>10</value></property>
<property><name>mapred.task.timeout</name><value>600000</value></property>
<property><name>giraph.maxWorkers</name><value>3</value></property>
<property><name>mapred.max.tracker.failures</name><value>4</value></property>
<property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value></property>
<property><name>mapred.job.tracker.jobhistory.lru.cache.size</name><value>5</value></property>
<property><name>fs.hdfs.impl</name><value>org.apache.hadoop.hdfs.DistributedFileSystem</value></property>
<property><name>mapred.queue.default.acl-administer-jobs</name><value>*</value></property>
<property><name>dfs.block.access.key.update.interval</name><value>600</value></property>
<property><name>mapred.skip.map.auto.incr.proc.count</name><value>true</value></property>
<property><name>mapreduce.job.complete.cancel.delegation.tokens</name><value>true</value></property>
<property><name>io.mapfile.bloom.size</name><value>1048576</value></property>
<property><name>mapreduce.reduce.shuffle.connect.timeout</name><value>180000</value></property>
<property><name>dfs.safemode.extension</name><value>30000</value></property>
<property><name>mapred.jobtracker.blacklist.fault-timeout-window</name><value>180</value></property>
<property><name>tasktracker.http.threads</name><value>40</value></property>
<property><name>mapred.job.shuffle.merge.percent</name><value>0.66</value></property>
<property><name>mapreduce.inputformat.class</name><value>org.apache.giraph.bsp.BspInputFormat</value></property>
<property><name>fs.ftp.impl</name><value>org.apache.hadoop.fs.ftp.FTPFileSystem</value></property>
<property><name>user.name</name><value>hduser</value></property>
<property><name>mapred.output.compress</name><value>false</value></property>
<property><name>io.bytes.per.checksum</name><value>512</value></property>
<property><name>mapred.combine.recordsBeforeProgress</name><value>10000</value></property>
<property><name>mapred.healthChecker.script.timeout</name><value>600000</value></property>
<property><name>topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value></property>
<property><name>dfs.https.server.keystore.resource</name><value>ssl-server.xml</value></property>
<property><name>mapred.reduce.slowstart.completed.maps</name><value>0.05</value></property>
<property><name>mapred.reduce.max.attempts</name><value>4</value></property>
<property><name>fs.ramfs.impl</name><value>org.apache.hadoop.fs.InMemoryFileSystem</value></property>
<property><name>dfs.block.access.token.lifetime</name><value>600</value></property>
<property><name>dfs.name.edits.dir</name><value>${dfs.name.dir}</value></property>
<property><name>mapred.skip.map.max.skip.records</name><value>0</value></property>
<property><name>mapred.cluster.map.memory.mb</name><value>-1</value></property>
<property><name>hadoop.security.group.mapping</name><value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value></property>
<property><name>mapred.job.tracker.persist.jobstatus.dir</name><value>/jobtracker/jobsInfo</value></property>
<property><name>mapred.jar</name><value>hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/.staging/job_201212051558_0002/job.jar</value></property>
<property><name>dfs.block.size</name><value>67108864</value></property>
<property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value></property>
<property><name>job.end.retry.attempts</name><value>0</value></property>
<property><name>fs.file.impl</name><value>org.apache.hadoop.fs.LocalFileSystem</value></property>
<property><name>mapred.local.dir.minspacestart</name><value>0</value></property>
<property><name>mapred.output.compression.type</name><value>RECORD</value></property>
<property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50020</value></property>
<property><name>dfs.permissions</name><value>true</value></property>
<property><name>topology.script.number.args</name><value>100</value></property>
<property><name>io.mapfile.bloom.error.rate</name><value>0.005</value></property>
<property><name>mapred.cluster.max.reduce.memory.mb</name><value>-1</value></property>
<property><name>mapred.max.tracker.blacklists</name><value>4</value></property>
<property><name>mapred.task.profile.maps</name><value>0-2</value></property>
<property><name>dfs.datanode.https.address</name><value>0.0.0.0:50475</value></property>
<property><name>mapred.userlog.retain.hours</name><value>24</value></property>
<property><name>dfs.secondary.http.address</name><value>0.0.0.0:50090</value></property>
<property><name>dfs.replication.max</name><value>512</value></property>
<property><name>mapred.job.tracker.persist.jobstatus.active</name><value>false</value></property>
<property><name>hadoop.security.authorization</name><value>false</value></property>
<property><name>local.cache.size</name><value>10737418240</value></property>
<property><name>dfs.namenode.delegation.token.renew-interval</name><value>86400000</value></property>
<property><name>mapred.min.split.size</name><value>0</value></property>
<property><name>mapred.map.tasks</name><value>4</value></property>
<property><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>
<property><name>mapreduce.job.counters.limit</name><value>120</value></property>
<property><name>dfs.https.client.keystore.resource</name><value>ssl-client.xml</value></property>
<property><name>mapred.job.queue.name</name><value>default</value></property>
<property><name>dfs.https.address</name><value>0.0.0.0:50470</value></property>
<property><name>mapred.job.tracker.retiredjobs.cache.size</name><value>1000</value></property>
<property><name>dfs.balance.bandwidthPerSec</name><value>1048576</value></property>
<property><name>ipc.server.listen.queue.size</name><value>128</value></property>
<property><name>job.end.retry.interval</name><value>30000</value></property>
<property><name>mapred.inmem.merge.threshold</name><value>1000</value></property>
<property><name>mapred.skip.attempts.to.start.skipping</name><value>2</value></property>
<property><name>mapreduce.tasktracker.outofband.heartbeat.damper</name><value>1000000</value></property>
<property><name>fs.checkpoint.dir</name><value>${hadoop.tmp.dir}/dfs/namesecondary</value></property>
<property><name>mapred.reduce.tasks</name><value>0</value></property>
<property><name>mapred.merge.recordsBeforeProgress</name><value>10000</value></property>
<property><name>mapred.userlog.limit.kb</name><value>0</value></property>
<property><name>mapred.job.reduce.memory.mb</name><value>-1</value></property>
<property><name>dfs.max.objects</name><value>0</value></property>
<property><name>webinterface.private.actions</name><value>false</value></property>
<property><name>hadoop.security.token.service.use_ip</name><value>true</value></property>
<property><name>io.sort.spill.percent</name><value>0.80</value></property>
<property><name>mapred.job.shuffle.input.buffer.percent</name><value>0.70</value></property>
<property><name>mapred.job.name</name><value>Giraph: org.apache.giraph.examples.SimpleShortestPathsVertex</value></property>
<property><name>dfs.datanode.dns.nameserver</name><value>default</value></property>
<property><name>mapred.map.tasks.speculative.execution</name><value>false</value></property>
<property><name>hadoop.util.hash.type</name><value>murmur</value></property>
<property><name>dfs.blockreport.intervalMsec</name><value>3600000</value></property>
<property><name>mapred.map.max.attempts</name><value>0</value></property>
<property><name>mapreduce.job.acl-view-job</name><value> </value></property>
<property><name>dfs.client.block.write.retries</name><value>3</value></property>
<property><name>mapred.job.tracker.handler.count</name><value>10</value></property>
<property><name>mapreduce.reduce.shuffle.read.timeout</name><value>180000</value></property>
<property><name>mapred.tasktracker.expiry.interval</name><value>600000</value></property>
<property><name>dfs.https.enable</name><value>false</value></property>
<property><name>mapred.jobtracker.maxtasks.per.job</name><value>-1</value></property>
<property><name>mapred.jobtracker.job.history.block.size</name><value>3145728</value></property>
<property><name>keep.failed.task.files</name><value>false</value></property>
<property><name>mapreduce.outputformat.class</name><value>org.apache.giraph.bsp.BspOutputFormat</value></property>
<property><name>dfs.datanode.failed.volumes.tolerated</name><value>0</value></property>
<property><name>ipc.client.tcpnodelay</name><value>false</value></property>
<property><name>mapred.task.profile.reduces</name><value>0-2</value></property>
<property><name>mapred.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
<property><name>io.map.index.skip</name><value>0</value></property>
<property><name>mapred.working.dir</name><value>hdfs://localhost:54310/user/hduser</value></property>
<property><name>ipc.server.tcpnodelay</name><value>false</value></property>
<property><name>mapred.jobtracker.blacklist.fault-bucket-width</name><value>15</value></property>
<property><name>dfs.namenode.delegation.key.update-interval</name><value>86400000</value></property>
<property><name>mapred.used.genericoptionsparser</name><value>true</value></property>
<property><name>mapred.mapper.new-api</name><value>true</value></property>
<property><name>mapred.job.map.memory.mb</name><value>-1</value></property>
<property><name>dfs.default.chunk.view.size</name><value>32768</value></property>
<property><name>hadoop.logfile.size</name><value>10000000</value></property>
<property><name>mapred.reduce.tasks.speculative.execution</name><value>true</value></property>
<property><name>mapreduce.job.dir</name><value>hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/.staging/job_201212051558_0002</value></property>
<property><name>mapreduce.tasktracker.outofband.heartbeat</name><value>false</value></property>
<property><name>mapreduce.reduce.input.limit</name><value>-1</value></property>
<property><name>dfs.datanode.du.reserved</name><value>0</value></property>
<property><name>hadoop.security.authentication</name><value>simple</value></property>
<property><name>fs.checkpoint.period</name><value>3600</value></property>
<property><name>dfs.web.ugi</name><value>webuser,webgroup</value></property>
<property><name>mapred.job.reuse.jvm.num.tasks</name><value>1</value></property>
<property><name>mapred.jobtracker.completeuserjobs.maximum</name><value>100</value></property>
<property><name>dfs.df.interval</name><value>60000</value></property>
<property><name>giraph.vertexClass</name><value>org.apache.giraph.examples.SimpleShortestPathsVertex</value></property>
<property><name>dfs.data.dir</name><value>${hadoop.tmp.dir}/dfs/data</value></property>
<property><name>mapred.task.tracker.task-controller</name><value>org.apache.hadoop.mapred.DefaultTaskController</value></property>
<property><name>giraph.minWorkers</name><value>3</value></property>
<property><name>fs.s3.maxRetries</name><value>4</value></property>
<property><name>dfs.datanode.dns.interface</name><value>default</value></property>
<property><name>mapred.cluster.max.map.memory.mb</name><value>-1</value></property>
<property><name>dfs.support.append</name><value>false</value></property>
<property><name>mapreduce.reduce.shuffle.maxfetchfailures</name><value>10</value></property>
<property><name>mapreduce.job.acl-modify-job</name><value> </value></property>
<property><name>dfs.permissions.supergroup</name><value>supergroup</value></property>
<property><name>mapred.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value></property>
<property><name>fs.hftp.impl</name><value>org.apache.hadoop.hdfs.HftpFileSystem</value></property>
<property><name>fs.trash.interval</name><value>0</value></property>
<property><name>fs.s3.sleepTimeSeconds</name><value>10</value></property>
<property><name>dfs.replication.min</name><value>1</value></property>
<property><name>mapred.submit.replication</name><value>10</value></property>
<property><name>fs.har.impl</name><value>org.apache.hadoop.fs.HarFileSystem</value></property>
<property><name>mapred.map.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
<property><name>mapred.tasktracker.dns.interface</name><value>default</value></property>
<property><name>dfs.namenode.decommission.interval</name><value>30</value></property>
<property><name>dfs.http.address</name><value>0.0.0.0:50070</value></property>
<property><name>dfs.heartbeat.interval</name><value>3</value></property>
<property><name>mapred.job.tracker</name><value>localhost:54311</value></property>
<property><name>mapreduce.job.submithost</name><value>beb-1.bgu.ac.il</value></property>
<property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value></property>
<property><name>giraph.vertexInputFormatClass</name><value>org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat</value></property>
<property><name>dfs.name.dir</name><value>${hadoop.tmp.dir}/dfs/name</value></property>
<property><name>mapred.line.input.format.linespermap</name><value>1</value></property>
<property><name>mapred.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value></property>
<property><name>dfs.datanode.http.address</name><value>0.0.0.0:50075</value></property>
<property><name>fs.webhdfs.impl</name><value>org.apache.hadoop.hdfs.web.WebHdfsFileSystem</value></property>
<property><name>mapred.local.dir.minspacekill</name><value>0</value></property>
<property><name>dfs.replication.interval</name><value>3</value></property>
<property><name>io.sort.record.percent</name><value>0.05</value></property>
<property><name>fs.kfs.impl</name><value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value></property>
<property><name>mapred.temp.dir</name><value>${hadoop.tmp.dir}/mapred/temp</value></property>
<property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>2</value></property>
<property><name>mapreduce.job.user.classpath.first</name><value>true</value></property>
<property><name>dfs.replication</name><value>1</value></property>
<property><name>fs.checkpoint.edits.dir</name><value>${fs.checkpoint.dir}</value></property>
<property><name>mapred.tasktracker.tasks.sleeptime-before-sigkill</name><value>5000</value></property>
<property><name>mapred.job.reduce.input.buffer.percent</name><value>0.0</value></property>
<property><name>mapred.tasktracker.indexcache.mb</name><value>10</value></property>
<property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value></property>
<property><name>hadoop.logfile.count</name><value>10</value></property>
<property><name>mapred.skip.reduce.auto.incr.proc.count</name><value>true</value></property>
<property><name>mapreduce.job.submithostaddress</name><value>127.0.1.1</value></property>
<property><name>io.seqfile.compress.blocksize</name><value>1000000</value></property>
<property><name>fs.s3.block.size</name><value>67108864</value></property>
<property><name>mapred.tasktracker.taskmemorymanager.monitoring-interval</name><value>5000</value></property>
<property><name>giraph.minPercentResponded</name><value>100.0</value></property>
<property><name>mapred.queue.default.state</name><value>RUNNING</value></property>
<property><name>mapred.acls.enabled</name><value>false</value></property>
<property><name>mapreduce.jobtracker.staging.root.dir</name><value>${hadoop.tmp.dir}/mapred/staging</value></property>
<property><name>mapred.queue.names</name><value>default</value></property>
<property><name>dfs.access.time.precision</name><value>3600000</value></property>
<property><name>fs.hsftp.impl</name><value>org.apache.hadoop.hdfs.HsftpFileSystem</value></property>
<property><name>mapred.task.tracker.http.address</name><value>0.0.0.0:50060</value></property>
<property><name>mapred.reduce.parallel.copies</name><value>5</value></property>
<property><name>io.seqfile.lazydecompress</name><value>true</value></property>
<property><name>mapred.output.dir</name><value>shortestPathsOutputGraph20</value></property>
<property><name>io.sort.mb</name><value>100</value></property>
<property><name>ipc.client.connection.maxidletime</name><value>10000</value></property>
<property><name>mapred.compress.map.output</name><value>false</value></property>
<property><name>hadoop.security.uid.cache.secs</name><value>14400</value></property>
<property><name>mapred.task.tracker.report.address</name><value>127.0.0.1:0</value></property>
<property><name>mapred.healthChecker.interval</name><value>60000</value></property>
<property><name>ipc.client.kill.max</name><value>10</value></property>
<property><name>ipc.client.connect.max.retries</name><value>10</value></property>
<property><name>ipc.ping.interval</name><value>300000</value></property>
<property><name>mapreduce.user.classpath.first</name><value>true</value></property>
<property><name>mapreduce.map.class</name><value>org.apache.giraph.graph.GraphMapper</value></property>
<property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3.S3FileSystem</value></property>
<property><name>mapred.user.jobconf.limit</name><value>5242880</value></property>
<property><name>mapred.input.dir</name><value>hdfs://localhost:54310/user/hduser/shortestPathsInputGraph</value></property>
<property><name>mapred.job.tracker.http.address</name><value>0.0.0.0:50030</value></property>
<property><name>io.file.buffer.size</name><value>4096</value></property>
<property><name>mapred.jobtracker.restart.recover</name><value>false</value></property>
<property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
<property><name>dfs.datanode.handler.count</name><value>3</value></property>
<property><name>mapred.task.profile</name><value>false</value></property>
<property><name>dfs.replication.considerLoad</name><value>true</value></property>
<property><name>jobclient.output.filter</name><value>FAILED</value></property>
<property><name>dfs.namenode.delegation.token.max-lifetime</name><value>604800000</value></property>
<property><name>mapred.tasktracker.map.tasks.maximum</name><value>4</value></property>
<property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value></property>
<property><name>fs.checkpoint.size</name><value>67108864</value></property>
</configuration>

Re: giraph - problem running shortest path example

Posted by Gil Tselenchuk <gi...@gmail.com>.
Hi,

I am still trying to run the Giraph shortest paths example. So far I
have resolved most of the WARNs in the Hadoop userlogs files, but I
still get this problem on the terminal:

13/01/19 16:34:48 INFO mapred.JobClient: Running job: job_201301191140_0006
13/01/19 16:34:49 INFO mapred.JobClient:  map 0% reduce 0%
13/01/19 16:35:07 INFO mapred.JobClient:  map 25% reduce 0%
13/01/19 16:35:10 INFO mapred.JobClient:  map 50% reduce 0%
13/01/19 16:35:16 INFO mapred.JobClient:  map 75% reduce 0%
13/01/19 16:35:18 INFO mapred.JobClient: Task Id :
attempt_201301191140_0006_m_000000_0, Status : FAILED
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)

13/01/19 16:35:28 INFO mapred.JobClient:  map 100% reduce 0%
13/01/19 16:35:55 INFO mapred.JobClient:  map 75% reduce 0%
13/01/19 16:36:00 INFO mapred.JobClient: Task Id :
attempt_201301191140_0006_m_000000_1, Status : FAILED
java.lang.Throwable: Child Error
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
Caused by: java.io.IOException: Task process exit with nonzero status of 1.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)


And in the log files I found this FATAL...

2013-01-19 16:34:58,851 INFO org.apache.giraph.comm.netty.NettyServer:
start: Started server communication server:
beb-1.bgu.ac.il/127.0.1.1:30000 with up to 16 threads on bind attempt 0
with sendBufferSize = 32768
receiveBufferSize = 524288 backlog = 3
2013-01-19 16:34:58,852 INFO org.apache.giraph.graph.BspServiceMaster:
becomeMaster: I am now the master!
2013-01-19 16:34:58,861 INFO org.apache.giraph.graph.BspService: process:
applicationAttemptChanged signaled
2013-01-19 16:34:58,867 WARN org.apache.giraph.graph.BspService: process:
Unknown and unprocessed event
(path=/_hadoopBsp/job_201301191140_0006/_applicationAttemptsDir/0/_superstepDir,
type=NodeChildrenChanged, state=SyncConnected)
2013-01-19 16:35:07,548 FATAL org.apache.giraph.graph.GraphMapper:
uncaughtException: OverrideExceptionHandler on thread
org.apache.giraph.graph.MasterThread, msg = generateVertexInputSplits: Got
IOException, exiting...
java.lang.IllegalStateException: generateVertexInputSplits: Got IOException
at
org.apache.giraph.graph.BspServiceMaster.generateInputSplits(BspServiceMaster.java:259)
at
org.apache.giraph.graph.BspServiceMaster.createInputSplits(BspServiceMaster.java:557)
at
org.apache.giraph.graph.BspServiceMaster.createVertexInputSplits(BspServiceMaster.java:622)
at org.apache.giraph.graph.MasterThread.run(MasterThread.java:102)
Caused by: java.io.IOException: No input paths specified in job
at
org.apache.giraph.io.GiraphFileInputFormat.listStatus(GiraphFileInputFormat.java:191)
at
org.apache.giraph.io.GiraphFileInputFormat.listVertexStatus(GiraphFileInputFormat.java:251)
at
org.apache.giraph.io.GiraphFileInputFormat.getVertexSplits(GiraphFileInputFormat.java:322)
at
org.apache.giraph.io.TextVertexInputFormat.getSplits(TextVertexInputFormat.java:61)
at
org.apache.giraph.graph.BspServiceMaster.generateInputSplits(BspServiceMaster.java:257)
... 3 more
2013-01-19 16:35:07,550 INFO org.apache.giraph.zk.ZooKeeperManager: run:
Shutdown hook started.
2013-01-19 16:35:07,550 WARN org.apache.giraph.zk.ZooKeeperManager:
onlineZooKeeperServers: Forced a shutdown hook kill of the ZooKeeper
process.
2013-01-19 16:35:07,550 INFO org.apache.giraph.zk.ZooKeeperManager:
onlineZooKeeperServers: ZooKeeper process exited with 1 (note that 143
typically means killed).
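
The root cause in the stack trace seems to be "No input paths specified
in job". One way to double-check what the job actually sees in HDFS
(standard Hadoop 1.x shell; this is the -ip directory from my earlier
runs):

hadoop fs -ls /user/hduser/shortestPathsInputGraph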

I hope you can help me solve this problem. So far I have tried running
this on a pseudo-distributed cluster and also on my real, working Hadoop
cluster, and I get the same error on both.

Thanks for the help
Gil

Re: giraph - problem running shortest path example

Posted by Claudio Martella <cl...@gmail.com>.
You should investigate the Hadoop logs. This usually happened to me when
running out of memory (OOM).
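
In Hadoop 1.x the per-attempt logs live under the tasktracker's log
directory (the path below is the default layout; adjust to your
install):

less $HADOOP_HOME/logs/userlogs/job_201301191140_0006/attempt_201301191140_0006_m_000000_0/stderr

The job conf you posted earlier also shows mapred.child.java.opts set to
-Xmx200m, which is a very small heap for a Giraph worker. One thing to
try is raising it in mapred-site.xml, for example:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>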

On Sat, Jan 5, 2013 at 2:50 PM, Gil Tselenchuk <gi...@gmail.com> wrote:

> Hello Eli Reisman,
> and thanks for your help.
>
> I ran the "mvn" command like you said and got no errors; now I can run
> the PageRank example successfully, so that's good.
>
> But when I run the SimpleShortestPathsVertex example, I get a new error
> that looks like this:
> --------------
> hadoop jar giraph/target/Giraph.jar org.apache.giraph.GiraphRunner
> org.apache.giraph.examples.SimpleShortestPathsVertex -if
> org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat -ip
> /user/hduser/shortestPathsInputGraph/ -of
> org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat -op
> shortestPathsOutputGraph25 -w 3
>
> 13/01/05 15:14:51 INFO mapred.JobClient: Running job:
> job_201301051323_0010
> 13/01/05 15:14:52 INFO mapred.JobClient:  map 0% reduce 0%
> 13/01/05 15:15:10 INFO mapred.JobClient:  map 25% reduce 0%
> 13/01/05 15:15:13 INFO mapred.JobClient:  map 50% reduce 0%
> 13/01/05 15:15:19 INFO mapred.JobClient:  map 75% reduce 0%
> 13/01/05 15:15:21 INFO mapred.JobClient: Task Id :
> attempt_201301051323_0010_m_000000_0, Status : FAILED
> java.lang.Throwable: Child Error
>  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)
> Caused by: java.io.IOException: Task process exit with nonzero status of
> 1.
>  at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)
>
> 13/01/05 15:15:31 INFO mapred.JobClient:  map 100% reduce 0%
> 13/01/05 15:20:34 INFO mapred.JobClient: Job complete:
> job_201301051323_0010
> 13/01/05 15:20:34 INFO mapred.JobClient: Counters: 5
> 13/01/05 15:20:34 INFO mapred.JobClient:   Job Counters
> 13/01/05 15:20:34 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=1277806
> 13/01/05 15:20:34 INFO mapred.JobClient:     Total time spent by all
> reduces waiting after reserving slots (ms)=0
> 13/01/05 15:20:34 INFO mapred.JobClient:     Total time spent by all
> maps waiting after reserving slots (ms)=0
> 13/01/05 15:20:34 INFO mapred.JobClient:     Launched map tasks=5
> 13/01/05 15:20:34 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=4055
>
> ------------
> When I googled this problem I found nothing that helped me (it is not
> the "log files become too full" problem).
> What am I doing wrong?
> Thanks
> Gil
>
> 2012/12/26 Eli Reisman <ap...@gmail.com>
>
>> If you use GiraphRunner, I think you want to use the bin/giraph run
>> script (there is a sketch of the invocation at the end of this message).
>> Alternatively, use the hadoop jar command as you did, but include the
>> Giraph fat jar and name only the fully qualified class of the example
>> you want to run, instead of including the path to GiraphRunner before
>> it. Either might work. Also, if you run on Hadoop 1.0.3, try to build
>> the Giraph fat jar this way:
>>
>> mvn -Phadoop_1.0 clean package
>>
>> and see what happens. Good luck!
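>>
>> For the bin/giraph route, the invocation would look something like this
>> (a sketch only; the fat-jar name is a placeholder for whatever your
>> build drops under target/, and the -op directory must not exist yet):
>>
>> bin/giraph target/giraph-jar-with-dependencies.jar \
>>   org.apache.giraph.examples.SimpleShortestPathsVertex \
>>   -if org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat \
>>   -ip /user/hduser/shortestPathsInputGraph \
>>   -of org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat \
>>   -op shortestPathsOutputGraph21 -w 3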
>>
>>
>> On Sat, Dec 22, 2012 at 5:09 AM, Gil Tselenchuk <gi...@gmail.com> wrote:
>>
>>> Hello friends,
>>>
>>> I have a problem running the Giraph "shortest paths" example on Hadoop.
>>> [...]
>>> TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000002_0" TASK_STATUS="FAILED"
>>> FINISH_TIME="1354718315788" HOSTNAME="beb-1\.bgu\.ac\.il"
>>> ERROR="java\.lang\.IllegalStateException: run: Caught an unrecoverable
>>> exception exists: Failed to check
>>> /_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
>>> after 3 tries!
>>>  at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:768)
>>> at org\.apache\.hadoop\.mapred\.MapTask\.runNewMapper(MapTask\.java:764)
>>>  at org\.apache\.hadoop\.mapred\.MapTask\.run(MapTask\.java:370)
>>> at org\.apache\.hadoop\.mapred\.Child$4\.run(Child\.java:255)
>>>  at java\.security\.AccessController\.doPrivileged(Native Method)
>>> at javax\.security\.auth\.Subject\.doAs(Subject\.java:396)
>>>  at
>>> org\.apache\.hadoop\.security\.UserGroupInformation\.doAs(UserGroupInformation\.java:1121)
>>> at org\.apache\.hadoop\.mapred\.Child\.main(Child\.java:249)
>>> Caused by: java\.lang\.IllegalStateException: exists: Failed to check
>>> /_hadoopBsp/job_201212051558_0002/_applicationAttemptsDir/0/_superstepDir/-1/_addressesAndPartitions
>>> after 3 tries!
>>>  at
>>> org\.apache\.giraph\.zk\.ZooKeeperExt\.exists(ZooKeeperExt\.java:369)
>>> at
>>> org\.apache\.giraph\.graph\.BspServiceWorker\.startSuperstep(BspServiceWorker\.java:653)
>>>  at
>>> org\.apache\.giraph\.graph\.BspServiceWorker\.setup(BspServiceWorker\.java:452)
>>> at org\.apache\.giraph\.graph\.GraphMapper\.map(GraphMapper\.java:540)
>>>  at org\.apache\.giraph\.graph\.GraphMapper\.run(GraphMapper\.java:739)
>>> \.\.\. 7 more
>>> ,Task attempt_201212051558_0002_m_000002_0 failed to report status for
>>> 602 seconds\. Killing!" .
>>> MapAttempt TASK_TYPE="CLEANUP" TASKID="task_201212051558_0002_m_000004"
>>> TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000004_0"
>>> START_TIME="1354718315790"
>>> TRACKER_NAME="tracker_beb-1\.bgu\.ac\.il:localhost/127\.0\.0\.1:49397"
>>> HTTP_PORT="50060" .
>>> MapAttempt TASK_TYPE="CLEANUP" TASKID="task_201212051558_0002_m_000004"
>>> TASK_ATTEMPT_ID="attempt_201212051558_0002_m_000004_0"
>>> TASK_STATUS="SUCCESS" FINISH_TIME="1354718319665"
>>> HOSTNAME="/default-rack/beb-1\.bgu\.ac\.il" STATE_STRING="cleanup"
>>> COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
>>> Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
>>> snapshot)(65875968)][(SPILLED_RECORDS)(Spilled
>>> Records)(0)][(CPU_MILLISECONDS)(CPU time spent
>>> \\(ms\\))(70)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
>>> \\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
>>> snapshot)(565895168)]}" .
>>> Task TASKID="task_201212051558_0002_m_000004" TASK_TYPE="CLEANUP"
>>> TASK_STATUS="SUCCESS" FINISH_TIME="1354718321788"
>>> COUNTERS="{(FileSystemCounters)(FileSystemCounters)[(FILE_BYTES_WRITTEN)(FILE_BYTES_WRITTEN)(22115)]}{(org\.apache\.hadoop\.mapred\.Task$Counter)(Map-Reduce
>>> Framework)[(PHYSICAL_MEMORY_BYTES)(Physical memory \\(bytes\\)
>>> snapshot)(65875968)][(SPILLED_RECORDS)(Spilled
>>> Records)(0)][(CPU_MILLISECONDS)(CPU time spent
>>> \\(ms\\))(70)][(COMMITTED_HEAP_BYTES)(Total committed heap usage
>>> \\(bytes\\))(59768832)][(VIRTUAL_MEMORY_BYTES)(Virtual memory \\(bytes\\)
>>> snapshot)(565895168)]}" .
>>> Job JOBID="job_201212051558_0002" FINISH_TIME="1354718321789"
>>> JOB_STATUS="FAILED" FINISHED_MAPS="0" FINISHED_REDUCES="0" FAIL_REASON="#
>>> of failed Map Tasks exceeded allowed limit\. FailedCount: 1\.
>>> LastFailedTask: task_201212051558_0002_m_000001" .
>>>
>>> shortestPathsOutputGraph20<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20&namenodeInfoPort=50070>
>>> /_logs<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs&namenodeInfoPort=50070>
>>> /history<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs/history&namenodeInfoPort=50070>
>>> /job_201212051558_0002_conf.xml
>>>
>>> <?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
>>>
>>> <property><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
>>>
>>> <property><name>mapred.task.cache.levels</name><value>2</value></property>
>>>
>>> <property><name>giraph.vertexOutputFormatClass</name><value>org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat</value></property>
>>>
>>> <property><name>hadoop.tmp.dir</name><value>/app/hadoop/tmp</value></property>
>>> <property><name>hadoop.native.lib</name><value>true</value></property>
>>>
>>> <property><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value></property>
>>>
>>> <property><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value></property>
>>>
>>> <property><name>dfs.https.need.client.auth</name><value>false</value></property>
>>>
>>> <property><name>ipc.client.idlethreshold</name><value>4000</value></property>
>>>
>>> <property><name>dfs.datanode.data.dir.perm</name><value>755</value></property>
>>>
>>> <property><name>mapred.system.dir</name><value>${hadoop.tmp.dir}/mapred/system</value></property>
>>>
>>> <property><name>mapred.job.tracker.persist.jobstatus.hours</name><value>0</value></property>
>>> <property><name>dfs.datanode.address</name><value>0.0.0.0:50010
>>> </value></property>
>>>
>>> <property><name>dfs.namenode.logging.level</name><value>info</value></property>
>>>
>>> <property><name>dfs.block.access.token.enable</name><value>false</value></property>
>>>
>>> <property><name>io.skip.checksum.errors</name><value>false</value></property>
>>> <property><name>fs.default.name
>>> </name><value>hdfs://localhost:54310</value></property>
>>>
>>> <property><name>mapred.cluster.reduce.memory.mb</name><value>-1</value></property>
>>> <property><name>mapred.child.tmp</name><value>./tmp</value></property>
>>>
>>> <property><name>fs.har.impl.disable.cache</name><value>true</value></property>
>>>
>>> <property><name>dfs.safemode.threshold.pct</name><value>0.999f</value></property>
>>>
>>> <property><name>mapred.skip.reduce.max.skip.groups</name><value>0</value></property>
>>>
>>> <property><name>dfs.namenode.handler.count</name><value>10</value></property>
>>>
>>> <property><name>dfs.blockreport.initialDelay</name><value>0</value></property>
>>>
>>> <property><name>mapred.heartbeats.in.second</name><value>100</value></property>
>>>
>>> <property><name>mapred.tasktracker.dns.nameserver</name><value>default</value></property>
>>> <property><name>io.sort.factor</name><value>10</value></property>
>>>
>>> <property><name>mapred.task.timeout</name><value>600000</value></property>
>>> <property><name>giraph.maxWorkers</name><value>3</value></property>
>>>
>>> <property><name>mapred.max.tracker.failures</name><value>4</value></property>
>>>
>>> <property><name>hadoop.rpc.socket.factory.class.default</name><value>org.apache.hadoop.net.StandardSocketFactory</value></property>
>>>
>>> <property><name>mapred.job.tracker.jobhistory.lru.cache.size</name><value>5</value></property>
>>>
>>> <property><name>fs.hdfs.impl</name><value>org.apache.hadoop.hdfs.DistributedFileSystem</value></property>
>>>
>>> <property><name>mapred.queue.default.acl-administer-jobs</name><value>*</value></property>
>>>
>>> <property><name>dfs.block.access.key.update.interval</name><value>600</value></property>
>>>
>>> <property><name>mapred.skip.map.auto.incr.proc.count</name><value>true</value></property>
>>>
>>> <property><name>mapreduce.job.complete.cancel.delegation.tokens</name><value>true</value></property>
>>>
>>> <property><name>io.mapfile.bloom.size</name><value>1048576</value></property>
>>>
>>> <property><name>mapreduce.reduce.shuffle.connect.timeout</name><value>180000</value></property>
>>>
>>> <property><name>dfs.safemode.extension</name><value>30000</value></property>
>>>
>>> <property><name>mapred.jobtracker.blacklist.fault-timeout-window</name><value>180</value></property>
>>>
>>> <property><name>tasktracker.http.threads</name><value>40</value></property>
>>>
>>> <property><name>mapred.job.shuffle.merge.percent</name><value>0.66</value></property>
>>>
>>> <property><name>mapreduce.inputformat.class</name><value>org.apache.giraph.bsp.BspInputFormat</value></property>
>>>
>>> <property><name>fs.ftp.impl</name><value>org.apache.hadoop.fs.ftp.FTPFileSystem</value></property>
>>> <property><name>user.name</name><value>hduser</value></property>
>>>
>>> <property><name>mapred.output.compress</name><value>false</value></property>
>>> <property><name>io.bytes.per.checksum</name><value>512</value></property>
>>>
>>> <property><name>mapred.combine.recordsBeforeProgress</name><value>10000</value></property>
>>>
>>> <property><name>mapred.healthChecker.script.timeout</name><value>600000</value></property>
>>>
>>> <property><name>topology.node.switch.mapping.impl</name><value>org.apache.hadoop.net.ScriptBasedMapping</value></property>
>>>
>>> <property><name>dfs.https.server.keystore.resource</name><value>ssl-server.xml</value></property>
>>>
>>> <property><name>mapred.reduce.slowstart.completed.maps</name><value>0.05</value></property>
>>>
>>> <property><name>mapred.reduce.max.attempts</name><value>4</value></property>
>>>
>>> <property><name>fs.ramfs.impl</name><value>org.apache.hadoop.fs.InMemoryFileSystem</value></property>
>>>
>>> <property><name>dfs.block.access.token.lifetime</name><value>600</value></property>
>>>
>>> <property><name>dfs.name.edits.dir</name><value>${dfs.name.dir}</value></property>
>>>
>>> <property><name>mapred.skip.map.max.skip.records</name><value>0</value></property>
>>>
>>> <property><name>mapred.cluster.map.memory.mb</name><value>-1</value></property>
>>>
>>> <property><name>hadoop.security.group.mapping</name><value>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</value></property>
>>>
>>> <property><name>mapred.job.tracker.persist.jobstatus.dir</name><value>/jobtracker/jobsInfo</value></property>
>>>
>>> <property><name>mapred.jar</name><value>hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/.staging/job_201212051558_0002/job.jar</value></property>
>>> <property><name>dfs.block.size</name><value>67108864</value></property>
>>>
>>> <property><name>fs.s3.buffer.dir</name><value>${hadoop.tmp.dir}/s3</value></property>
>>> <property><name>job.end.retry.attempts</name><value>0</value></property>
>>>
>>> <property><name>fs.file.impl</name><value>org.apache.hadoop.fs.LocalFileSystem</value></property>
>>>
>>> <property><name>mapred.local.dir.minspacestart</name><value>0</value></property>
>>>
>>> <property><name>mapred.output.compression.type</name><value>RECORD</value></property>
>>> <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50020
>>> </value></property>
>>> <property><name>dfs.permissions</name><value>true</value></property>
>>>
>>> <property><name>topology.script.number.args</name><value>100</value></property>
>>>
>>> <property><name>io.mapfile.bloom.error.rate</name><value>0.005</value></property>
>>>
>>> <property><name>mapred.cluster.max.reduce.memory.mb</name><value>-1</value></property>
>>>
>>> <property><name>mapred.max.tracker.blacklists</name><value>4</value></property>
>>>
>>> <property><name>mapred.task.profile.maps</name><value>0-2</value></property>
>>> <property><name>dfs.datanode.https.address</name><value>0.0.0.0:50475
>>> </value></property>
>>>
>>> <property><name>mapred.userlog.retain.hours</name><value>24</value></property>
>>> <property><name>dfs.secondary.http.address</name><value>0.0.0.0:50090
>>> </value></property>
>>> <property><name>dfs.replication.max</name><value>512</value></property>
>>>
>>> <property><name>mapred.job.tracker.persist.jobstatus.active</name><value>false</value></property>
>>>
>>> <property><name>hadoop.security.authorization</name><value>false</value></property>
>>>
>>> <property><name>local.cache.size</name><value>10737418240</value></property>
>>>
>>> <property><name>dfs.namenode.delegation.token.renew-interval</name><value>86400000</value></property>
>>> <property><name>mapred.min.split.size</name><value>0</value></property>
>>> <property><name>mapred.map.tasks</name><value>4</value></property>
>>>
>>> <property><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>
>>>
>>> <property><name>mapreduce.job.counters.limit</name><value>120</value></property>
>>>
>>> <property><name>dfs.https.client.keystore.resource</name><value>ssl-client.xml</value></property>
>>> <property><name>mapred.job.queue.name
>>> </name><value>default</value></property>
>>> <property><name>dfs.https.address</name><value>0.0.0.0:50470
>>> </value></property>
>>>
>>> <property><name>mapred.job.tracker.retiredjobs.cache.size</name><value>1000</value></property>
>>>
>>> <property><name>dfs.balance.bandwidthPerSec</name><value>1048576</value></property>
>>>
>>> <property><name>ipc.server.listen.queue.size</name><value>128</value></property>
>>>
>>> <property><name>job.end.retry.interval</name><value>30000</value></property>
>>>
>>> <property><name>mapred.inmem.merge.threshold</name><value>1000</value></property>
>>>
>>> <property><name>mapred.skip.attempts.to.start.skipping</name><value>2</value></property>
>>>
>>> <property><name>mapreduce.tasktracker.outofband.heartbeat.damper</name><value>1000000</value></property>
>>>
>>> <property><name>fs.checkpoint.dir</name><value>${hadoop.tmp.dir}/dfs/namesecondary</value></property>
>>> <property><name>mapred.reduce.tasks</name><value>0</value></property>
>>>
>>> <property><name>mapred.merge.recordsBeforeProgress</name><value>10000</value></property>
>>> <property><name>mapred.userlog.limit.kb</name><value>0</value></property>
>>>
>>> <property><name>mapred.job.reduce.memory.mb</name><value>-1</value></property>
>>> <property><name>dfs.max.objects</name><value>0</value></property>
>>>
>>> <property><name>webinterface.private.actions</name><value>false</value></property>
>>>
>>> <property><name>hadoop.security.token.service.use_ip</name><value>true</value></property>
>>>
>>> <property><name>io.sort.spill.percent</name><value>0.80</value></property>
>>>
>>> <property><name>mapred.job.shuffle.input.buffer.percent</name><value>0.70</value></property>
>>> <property><name>mapred.job.name</name><value>Giraph:
>>> org.apache.giraph.examples.SimpleShortestPathsVertex</value></property>
>>>
>>> <property><name>dfs.datanode.dns.nameserver</name><value>default</value></property>
>>>
>>> <property><name>mapred.map.tasks.speculative.execution</name><value>false</value></property>
>>>
>>> <property><name>hadoop.util.hash.type</name><value>murmur</value></property>
>>>
>>> <property><name>dfs.blockreport.intervalMsec</name><value>3600000</value></property>
>>> <property><name>mapred.map.max.attempts</name><value>0</value></property>
>>> <property><name>mapreduce.job.acl-view-job</name><value>
>>> </value></property>
>>>
>>> <property><name>dfs.client.block.write.retries</name><value>3</value></property>
>>>
>>> <property><name>mapred.job.tracker.handler.count</name><value>10</value></property>
>>>
>>> <property><name>mapreduce.reduce.shuffle.read.timeout</name><value>180000</value></property>
>>>
>>> <property><name>mapred.tasktracker.expiry.interval</name><value>600000</value></property>
>>> <property><name>dfs.https.enable</name><value>false</value></property>
>>>
>>> <property><name>mapred.jobtracker.maxtasks.per.job</name><value>-1</value></property>
>>>
>>> <property><name>mapred.jobtracker.job.history.block.size</name><value>3145728</value></property>
>>>
>>> <property><name>keep.failed.task.files</name><value>false</value></property>
>>>
>>> <property><name>mapreduce.outputformat.class</name><value>org.apache.giraph.bsp.BspOutputFormat</value></property>
>>>
>>> <property><name>dfs.datanode.failed.volumes.tolerated</name><value>0</value></property>
>>>
>>> <property><name>ipc.client.tcpnodelay</name><value>false</value></property>
>>>
>>> <property><name>mapred.task.profile.reduces</name><value>0-2</value></property>
>>>
>>> <property><name>mapred.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
>>> <property><name>io.map.index.skip</name><value>0</value></property>
>>>
>>> <property><name>mapred.working.dir</name><value>hdfs://localhost:54310/user/hduser</value></property>
>>>
>>> <property><name>ipc.server.tcpnodelay</name><value>false</value></property>
>>>
>>> <property><name>mapred.jobtracker.blacklist.fault-bucket-width</name><value>15</value></property>
>>>
>>> <property><name>dfs.namenode.delegation.key.update-interval</name><value>86400000</value></property>
>>>
>>> <property><name>mapred.used.genericoptionsparser</name><value>true</value></property>
>>>
>>> <property><name>mapred.mapper.new-api</name><value>true</value></property>
>>>
>>> <property><name>mapred.job.map.memory.mb</name><value>-1</value></property>
>>>
>>> <property><name>dfs.default.chunk.view.size</name><value>32768</value></property>
>>>
>>> <property><name>hadoop.logfile.size</name><value>10000000</value></property>
>>>
>>> <property><name>mapred.reduce.tasks.speculative.execution</name><value>true</value></property>
>>>
>>> <property><name>mapreduce.job.dir</name><value>hdfs://localhost:54310/app/hadoop/tmp/mapred/staging/hduser/.staging/job_201212051558_0002</value></property>
>>>
>>> <property><name>mapreduce.tasktracker.outofband.heartbeat</name><value>false</value></property>
>>>
>>> <property><name>mapreduce.reduce.input.limit</name><value>-1</value></property>
>>>
>>> <property><name>dfs.datanode.du.reserved</name><value>0</value></property>
>>>
>>> <property><name>hadoop.security.authentication</name><value>simple</value></property>
>>> <property><name>fs.checkpoint.period</name><value>3600</value></property>
>>>
>>> <property><name>dfs.web.ugi</name><value>webuser,webgroup</value></property>
>>>
>>> <property><name>mapred.job.reuse.jvm.num.tasks</name><value>1</value></property>
>>>
>>> <property><name>mapred.jobtracker.completeuserjobs.maximum</name><value>100</value></property>
>>> <property><name>dfs.df.interval</name><value>60000</value></property>
>>>
>>> <property><name>giraph.vertexClass</name><value>org.apache.giraph.examples.SimpleShortestPathsVertex</value></property>
>>>
>>> <property><name>dfs.data.dir</name><value>${hadoop.tmp.dir}/dfs/data</value></property>
>>>
>>> <property><name>mapred.task.tracker.task-controller</name><value>org.apache.hadoop.mapred.DefaultTaskController</value></property>
>>> <property><name>giraph.minWorkers</name><value>3</value></property>
>>> <property><name>fs.s3.maxRetries</name><value>4</value></property>
>>>
>>> <property><name>dfs.datanode.dns.interface</name><value>default</value></property>
>>>
>>> <property><name>mapred.cluster.max.map.memory.mb</name><value>-1</value></property>
>>> <property><name>dfs.support.append</name><value>false</value></property>
>>>
>>> <property><name>mapreduce.reduce.shuffle.maxfetchfailures</name><value>10</value></property>
>>> <property><name>mapreduce.job.acl-modify-job</name><value>
>>> </value></property>
>>>
>>> <property><name>dfs.permissions.supergroup</name><value>supergroup</value></property>
>>>
>>> <property><name>mapred.local.dir</name><value>${hadoop.tmp.dir}/mapred/local</value></property>
>>>
>>> <property><name>fs.hftp.impl</name><value>org.apache.hadoop.hdfs.HftpFileSystem</value></property>
>>> <property><name>fs.trash.interval</name><value>0</value></property>
>>> <property><name>fs.s3.sleepTimeSeconds</name><value>10</value></property>
>>> <property><name>dfs.replication.min</name><value>1</value></property>
>>>
>>> <property><name>mapred.submit.replication</name><value>10</value></property>
>>>
>>> <property><name>fs.har.impl</name><value>org.apache.hadoop.fs.HarFileSystem</value></property>
>>>
>>> <property><name>mapred.map.output.compression.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
>>>
>>> <property><name>mapred.tasktracker.dns.interface</name><value>default</value></property>
>>>
>>> <property><name>dfs.namenode.decommission.interval</name><value>30</value></property>
>>> <property><name>dfs.http.address</name><value>0.0.0.0:50070
>>> </value></property>
>>> <property><name>dfs.heartbeat.interval</name><value>3</value></property>
>>>
>>> <property><name>mapred.job.tracker</name><value>localhost:54311</value></property>
>>> <property><name>mapreduce.job.submithost</name><value>beb-1.bgu.ac.il
>>> </value></property>
>>>
>>> <property><name>io.seqfile.sorter.recordlimit</name><value>1000000</value></property>
>>>
>>> <property><name>giraph.vertexInputFormatClass</name><value>org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat</value></property>
>>>
>>> <property><name>dfs.name.dir</name><value>${hadoop.tmp.dir}/dfs/name</value></property>
>>>
>>> <property><name>mapred.line.input.format.linespermap</name><value>1</value></property>
>>>
>>> <property><name>mapred.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value></property>
>>> <property><name>dfs.datanode.http.address</name><value>0.0.0.0:50075
>>> </value></property>
>>>
>>> <property><name>fs.webhdfs.impl</name><value>org.apache.hadoop.hdfs.web.WebHdfsFileSystem</value></property>
>>>
>>> <property><name>mapred.local.dir.minspacekill</name><value>0</value></property>
>>>
>>> <property><name>dfs.replication.interval</name><value>3</value></property>
>>>
>>> <property><name>io.sort.record.percent</name><value>0.05</value></property>
>>>
>>> <property><name>fs.kfs.impl</name><value>org.apache.hadoop.fs.kfs.KosmosFileSystem</value></property>
>>>
>>> <property><name>mapred.temp.dir</name><value>${hadoop.tmp.dir}/mapred/temp</value></property>
>>>
>>> <property><name>mapred.tasktracker.reduce.tasks.maximum</name><value>2</value></property>
>>>
>>> <property><name>mapreduce.job.user.classpath.first</name><value>true</value></property>
>>> <property><name>dfs.replication</name><value>1</value></property>
>>>
>>> <property><name>fs.checkpoint.edits.dir</name><value>${fs.checkpoint.dir}</value></property>
>>>
>>> <property><name>mapred.tasktracker.tasks.sleeptime-before-sigkill</name><value>5000</value></property>
>>>
>>> <property><name>mapred.job.reduce.input.buffer.percent</name><value>0.0</value></property>
>>>
>>> <property><name>mapred.tasktracker.indexcache.mb</name><value>10</value></property>
>>>
>>> <property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value></property>
>>> <property><name>hadoop.logfile.count</name><value>10</value></property>
>>>
>>> <property><name>mapred.skip.reduce.auto.incr.proc.count</name><value>true</value></property>
>>>
>>> <property><name>mapreduce.job.submithostaddress</name><value>127.0.1.1</value></property>
>>>
>>> <property><name>io.seqfile.compress.blocksize</name><value>1000000</value></property>
>>> <property><name>fs.s3.block.size</name><value>67108864</value></property>
>>>
>>> <property><name>mapred.tasktracker.taskmemorymanager.monitoring-interval</name><value>5000</value></property>
>>>
>>> <property><name>giraph.minPercentResponded</name><value>100.0</value></property>
>>>
>>> <property><name>mapred.queue.default.state</name><value>RUNNING</value></property>
>>> <property><name>mapred.acls.enabled</name><value>false</value></property>
>>>
>>> <property><name>mapreduce.jobtracker.staging.root.dir</name><value>${hadoop.tmp.dir}/mapred/staging</value></property>
>>>
>>> <property><name>mapred.queue.names</name><value>default</value></property>
>>>
>>> <property><name>dfs.access.time.precision</name><value>3600000</value></property>
>>>
>>> <property><name>fs.hsftp.impl</name><value>org.apache.hadoop.hdfs.HsftpFileSystem</value></property>
>>> <property><name>mapred.task.tracker.http.address</name><value>
>>> 0.0.0.0:50060</value></property>
>>>
>>> <property><name>mapred.reduce.parallel.copies</name><value>5</value></property>
>>>
>>> <property><name>io.seqfile.lazydecompress</name><value>true</value></property>
>>>
>>> <property><name>mapred.output.dir</name><value>shortestPathsOutputGraph20</value></property>
>>> <property><name>io.sort.mb</name><value>100</value></property>
>>>
>>> <property><name>ipc.client.connection.maxidletime</name><value>10000</value></property>
>>>
>>> <property><name>mapred.compress.map.output</name><value>false</value></property>
>>>
>>> <property><name>hadoop.security.uid.cache.secs</name><value>14400</value></property>
>>> <property><name>mapred.task.tracker.report.address</name><value>
>>> 127.0.0.1:0</value></property>
>>>
>>> <property><name>mapred.healthChecker.interval</name><value>60000</value></property>
>>> <property><name>ipc.client.kill.max</name><value>10</value></property>
>>>
>>> <property><name>ipc.client.connect.max.retries</name><value>10</value></property>
>>> <property><name>ipc.ping.interval</name><value>300000</value></property>
>>>
>>> <property><name>mapreduce.user.classpath.first</name><value>true</value></property>
>>>
>>> <property><name>mapreduce.map.class</name><value>org.apache.giraph.graph.GraphMapper</value></property>
>>>
>>> <property><name>fs.s3.impl</name><value>org.apache.hadoop.fs.s3.S3FileSystem</value></property>
>>>
>>> <property><name>mapred.user.jobconf.limit</name><value>5242880</value></property>
>>>
>>> <property><name>mapred.input.dir</name><value>hdfs://localhost:54310/user/hduser/shortestPathsInputGraph</value></property>
>>> <property><name>mapred.job.tracker.http.address</name><value>
>>> 0.0.0.0:50030</value></property>
>>> <property><name>io.file.buffer.size</name><value>4096</value></property>
>>>
>>> <property><name>mapred.jobtracker.restart.recover</name><value>false</value></property>
>>>
>>> <property><name>io.serializations</name><value>org.apache.hadoop.io.serializer.WritableSerialization</value></property>
>>>
>>> <property><name>dfs.datanode.handler.count</name><value>3</value></property>
>>> <property><name>mapred.task.profile</name><value>false</value></property>
>>>
>>> <property><name>dfs.replication.considerLoad</name><value>true</value></property>
>>>
>>> <property><name>jobclient.output.filter</name><value>FAILED</value></property>
>>>
>>> <property><name>dfs.namenode.delegation.token.max-lifetime</name><value>604800000</value></property>
>>>
>>> <property><name>mapred.tasktracker.map.tasks.maximum</name><value>4</value></property>
>>>
>>> <property><name>io.compression.codecs</name><value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value></property>
>>>
>>> <property><name>fs.checkpoint.size</name><value>67108864</value></property>
>>> </configuration>
>>>
>>
>>
>


-- 
   Claudio Martella
   claudio.martella@gmail.com

Re: giraph - problem running shortest path example

Posted by Gil Tselenchuk <gi...@gmail.com>.
Hello Eli Reisman,
and thanks for your help.

I run the "mvn" command like you sad and I have no errors, now I can run
the PageRank example successfully ,so It's good.

But when I run the *SimpleShortestPathsVertex* example I get a new error
that looks like this:
--------------
*hadoop jar giraph/target/Giraph.jar org.apache.giraph.GiraphRunner
org.apache.giraph.examples.SimpleShortestPathsVertex -if
org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat -ip
/user/hduser/shortestPathsInputGraph/ -of
org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat -op
shortestPathsOutputGraph25 -w 3*

*13/01/05 15:14:51 INFO mapred.JobClient: Running job: job_201301051323_0010
*
*13/01/05 15:14:52 INFO mapred.JobClient:  map 0% reduce 0%*
*13/01/05 15:15:10 INFO mapred.JobClient:  map 25% reduce 0%*
*13/01/05 15:15:13 INFO mapred.JobClient:  map 50% reduce 0%*
*13/01/05 15:15:19 INFO mapred.JobClient:  map 75% reduce 0%*
*13/01/05 15:15:21 INFO mapred.JobClient: Task Id :
attempt_201301051323_0010_m_000000_0, Status : FAILED*
*java.lang.Throwable: Child Error*
* at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:271)*
*Caused by: java.io.IOException: Task process exit with nonzero status of 1.
*
* at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:258)*

*13/01/05 15:15:31 INFO mapred.JobClient:  map 100% reduce 0%*
*13/01/05 15:20:34 INFO mapred.JobClient: Job complete:
job_201301051323_0010*
*13/01/05 15:20:34 INFO mapred.JobClient: Counters: 5*
*13/01/05 15:20:34 INFO mapred.JobClient:   Job Counters *
*13/01/05 15:20:34 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=1277806*
*13/01/05 15:20:34 INFO mapred.JobClient:     Total time spent by all
reduces waiting after reserving slots (ms)=0*
*13/01/05 15:20:34 INFO mapred.JobClient:     Total time spent by all maps
waiting after reserving slots (ms)=0*
*13/01/05 15:20:34 INFO mapred.JobClient:     Launched map tasks=5*
*13/01/05 15:20:34 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=4055*

------------
When I googled this problem, I found nothing that helped me (it is not the
"log files become too full" problem).
What am I doing wrong?

Thanks
Gil
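
For a "Child Error ... Task process exit with nonzero status of 1" failure
like the one above, the child JVM usually dies before it can report anything
back through the JobClient, so the real cause tends to land in the
per-attempt task logs on the TaskTracker rather than in the job output. A
minimal way to pull them on a single-node Hadoop 1.0.x setup, assuming the
default log location under $HADOOP_HOME/logs (the attempt id is copied from
the output above):

# The failed attempt reported by the JobClient:
ATTEMPT=attempt_201301051323_0010_m_000000_0

# The userlogs layout differs slightly between Hadoop versions, so
# search for the attempt's log files instead of hard-coding the path:
find "$HADOOP_HOME/logs/userlogs" -path "*$ATTEMPT*" \
  \( -name stderr -o -name syslog -o -name stdout \) -exec cat {} \;

If stderr is empty, the syslog file usually carries the task-side stack trace.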

2012/12/26 Eli Reisman <ap...@gmail.com>

> If you use GiraphRunner, I think you want to use the bin/giraph run
> script. Alternately, use the hadoop jar command as you did but include the
> giraph fat-jar and only name the fully qualified class of the example you
> want to run, instead of including the path to GiraphRunner before it.
> Either might work. Also, if you run on Hadoop 1.0.3, try to build the
> Giraph fat jar this way:
>
> mvn -Phadoop_1.0 clean package
>
> and see what happens. Good luck!
>
>
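
Putting those suggestions together, a minimal sketch, assuming the hadoop_1.0
profile drops a fat jar named giraph-0.2-SNAPSHOT-jar-with-dependencies.jar
under target/ (the exact name depends on the checkout) and that its manifest
supplies the runner class, which is why GiraphRunner is not named on the
command line:

# Rebuild the fat jar against Hadoop 1.0.x:
mvn -Phadoop_1.0 clean package

# Run the example by naming only its fully qualified class
# (jar path and output directory are illustrative):
hadoop jar target/giraph-0.2-SNAPSHOT-jar-with-dependencies.jar \
    org.apache.giraph.examples.SimpleShortestPathsVertex \
    -if org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat \
    -ip /user/hduser/shortestPathsInputGraph/ \
    -of org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat \
    -op shortestPathsOutputGraph21 -w 3
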
> On Sat, Dec 22, 2012 at 5:09 AM, Gil Tselenchuk <gi...@gmail.com> wrote:
>
>> [...]
>>
>> shortestPathsOutputGraph20<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20&namenodeInfoPort=50070>
>> /_logs<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs&namenodeInfoPort=50070>
>> /history<http://localhost:50075/browseDirectory.jsp?dir=/user/hduser/shortestPathsOutputGraph20/_logs/history&namenodeInfoPort=50070>
>> /job_201212051558_0002_conf.xml
>>
>> <?xml version="1.0" encoding="UTF-8" standalone="no"?><configuration>
>>
>> <property><name>fs.s3n.impl</name><value>org.apache.hadoop.fs.s3native.NativeS3FileSystem</value></property>
>> <property><name>mapred.task.cache.levels</name><value>2</value></property>
>>
>> <property><name>giraph.vertexOutputFormatClass</name><value>org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat</value></property>
>>
>> <property><name>hadoop.tmp.dir</name><value>/app/hadoop/tmp</value></property>
>> <property><name>hadoop.native.lib</name><value>true</value></property>
>>
>> <property><name>map.sort.class</name><value>org.apache.hadoop.util.QuickSort</value></property>
>>
>> <property><name>dfs.namenode.decommission.nodes.per.interval</name><value>5</value></property>
>>
>> <property><name>dfs.https.need.client.auth</name><value>false</value></property>
>>
>> <property><name>ipc.client.idlethreshold</name><value>4000</value></property>
>>
>> <property><name>dfs.datanode.data.dir.perm</name><value>755</value></property>
>>
>> <property><name>mapred.system.dir</name><value>${hadoop.tmp.dir}/mapred/system</value></property>
>>
>> <property><name>mapred.job.tracker.persist.jobstatus.hours</name><value>0</value></property>
>> <property><name>dfs.datanode.address</name><value>0.0.0.0:50010
>> </value></property>
>>
>> <property><name>dfs.namenode.logging.level</name><value>info</value></property>
>>
>> <property><name>dfs.block.access.token.enable</name><value>false</value></property>
>>
>> [snip: remainder of the quoted job_201212051558_0002_conf.xml; the same
>> configuration appears in full in the original post at the top of the thread]

Re: giraph - problem running shortest path example

Posted by Eli Reisman <ap...@gmail.com>.
If you use GiraphRunner, I think you want to launch through the
bin/giraph run script. Alternatively, use the hadoop jar command as you
did, but point it at the Giraph fat-jar and name only the fully
qualified class of the example you want to run, instead of including
the path to GiraphRunner before it. Either might work. Also, since you
run on Hadoop 1.0.3, try building the Giraph fat jar this way:

mvn -Phadoop_1.0 clean package

and see what happens. Good luck!
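
For reference, the two launch styles above would look roughly like this.
This is an illustrative sketch, not a command taken from the thread: the
fat-jar name (giraph-0.2-SNAPSHOT-jar-with-dependencies.jar here) depends
on what your build actually produces, and bin/giraph's exact argument
order should be checked against its own usage message.

# Option 1: the bin/giraph wrapper script that ships with Giraph
bin/giraph target/giraph-0.2-SNAPSHOT-jar-with-dependencies.jar \
  org.apache.giraph.examples.SimpleShortestPathsVertex \
  -if org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat \
  -ip /user/hduser/shortestPathsInputGraph/ \
  -of org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat \
  -op shortestPathsOutputGraph20 -w 3

# Option 2: plain hadoop jar pointed at the fat-jar, naming only the
# example class (no GiraphRunner on the command line)
hadoop jar target/giraph-0.2-SNAPSHOT-jar-with-dependencies.jar \
  org.apache.giraph.examples.SimpleShortestPathsVertex \
  -if org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexInputFormat \
  -ip /user/hduser/shortestPathsInputGraph/ \
  -of org.apache.giraph.io.JsonLongDoubleFloatDoubleVertexOutputFormat \
  -op shortestPathsOutputGraph20 -w 3

If a run completes cleanly, the computed distances end up in part files
under the -op directory and can be inspected with
hadoop fs -cat shortestPathsOutputGraph20/part*. Note that -op must name
a directory that does not exist yet, so use a fresh name for each attempt.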

On Sat, Dec 22, 2012 at 5:09 AM, Gil Tselenchuk <gi...@gmail.com> wrote:

> [snip: Gil's original message quoted in full (problem description,
> terminal output, failed-task stack traces, job history, and
> job_201212051558_0002_conf.xml); the same content appears in the
> original post at the top of the thread]