Posted to mapreduce-dev@hadoop.apache.org by Apache Hudson Server <hu...@hudson.apache.org> on 2011/01/07 02:17:36 UTC

Hadoop-Mapreduce-22-branch - Build # 10 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/10/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 204030 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-07 01:17:10,402 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,402 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,403 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,403 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,403 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,404 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,404 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,404 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,405 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,405 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,406 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,406 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,406 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,407 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,407 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,407 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,408 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,408 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,408 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-07 01:17:10,409 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.956 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.356 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.292 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:809: Tests failed!

Total time: 159 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  <init>.org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken

Error Message:
null

Stack Trace:
java.lang.ExceptionInInitializerError
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:219)
	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:276)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.<clinit>(TestUmbilicalProtocolWithJobToken.java:63)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:169)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos configuration
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:89)
Caused by: KrbException: Could not load configuration file /etc/krb5.conf (No such file or directory)
	at sun.security.krb5.Config.<init>(Config.java:147)
	at sun.security.krb5.Config.getInstance(Config.java:79)
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:85)
Caused by: java.io.FileNotFoundException: /etc/krb5.conf (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.security.krb5.Config$1.run(Config.java:539)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.Config.loadConfigFile(Config.java:535)
	at sun.security.krb5.Config.<init>(Config.java:144)
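
The TestUmbilicalProtocolWithJobToken failure above is environmental, not a code regression: KerberosName's static initializer runs when UserGroupInformation.setConfiguration is called, and the JDK's sun.security.krb5.Config insists on finding /etc/krb5.conf, which does not exist on the build slave. A minimal sketch of one possible workaround (an assumption, not the project's actual fix): the JDK will accept the realm/kdc system properties in place of the config file, so setting dummy values before any security class loads lets the static initializer succeed on hosts without Kerberos.

```java
// Hypothetical helper (not part of Hadoop): point the JDK Kerberos
// machinery at dummy values so sun.security.krb5.Config does not need
// to read /etc/krb5.conf. Must run before KerberosName is class-loaded.
public class Krb5TestSetup {
    /** Supply dummy realm/kdc so no krb5.conf file is required. */
    public static void configureDummyKrb5() {
        System.setProperty("java.security.krb5.realm", "EXAMPLE.COM");
        System.setProperty("java.security.krb5.kdc", "localhost");
    }

    public static void main(String[] args) {
        configureDummyKrb5();
        System.out.println(System.getProperty("java.security.krb5.realm"));
    }
}
```

The same effect can be had without code changes by passing -Djava.security.krb5.realm=EXAMPLE.COM -Djava.security.krb5.kdc=localhost to the test JVM via the build's junit task.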




Hadoop-Mapreduce-22-branch - Build # 19 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/19/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 223 lines...]
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/output/TestMRMultipleOutputs.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/chain
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/chain/TestChainErrors.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/chain/TestMapReduceChain.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/chain/TestSingleElementChain.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db/TestDBOutputFormat.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db/TestDBJob.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db/TestDataDrivenDBInputFormat.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db/TestIntegerSplitter.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/db/TestTextSplitter.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/aggregate
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/aggregate/TestMapReduceAggregates.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/aggregate/AggregatorTests.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/fieldsel
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/fieldsel/TestMRFieldSelection.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestTotalOrderPartitioner.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestMRKeyFieldBasedComparator.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestInputSampler.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestBinaryPartitioner.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestMRKeyFieldBasedPartitioner.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/partition/TestKeyFieldHelper.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/jobcontrol
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/jobcontrol/TestMapReduceJobControl.java
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/map
A         src/test/mapred/org/apache/hadoop/mapreduce/lib/map/TestMultithreadedMapper.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security
A         src/test/mapred/org/apache/hadoop/mapreduce/security/token
A         src/test/mapred/org/apache/hadoop/mapreduce/security/token/TestDelegationTokenRenewal.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security/token/delegation
A         src/test/mapred/org/apache/hadoop/mapreduce/security/token/delegation/TestDelegationToken.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCache.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security/TestBinaryTokenFile.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security/TestUmbilicalProtocolWithJobToken.java
A         src/test/mapred/org/apache/hadoop/mapreduce/security/TestTokenCacheOldApi.java
A         src/test/mapred/org/apache/hadoop/mapreduce/TestMapCollection.java
A         src/test/mapred/org/apache/hadoop/mapreduce/util
A         src/test/mapred/org/apache/hadoop/mapreduce/util/TestLinuxResourceCalculatorPlugin.java
A         src/test/mapred/org/apache/hadoop/mapreduce/util/TestMRAsyncDiskService.java
A         src/test/mapred/org/apache/hadoop/mapreduce/util/TestProcfsBasedProcessTree.java
A         src/test/mapred/org/apache/hadoop/cli
A         src/test/mapred/org/apache/hadoop/cli/testMRConf.xml
A         src/test/mapred/org/apache/hadoop/cli/TestMRCLI.java
A         src/test/mapred/org/apache/hadoop/cli/data60bytes
A         src/test/mapred/org/apache/hadoop/io
AU        src/test/mapred/org/apache/hadoop/io/FileBench.java
AU        src/test/mapred/org/apache/hadoop/io/TestSequenceFileMergeProgress.java
A         src/test/mapred/org/apache/hadoop/security
SCM check out aborted
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.

Hadoop-Mapreduce-22-branch - Build # 18 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/18/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 51456 lines...]
    [junit] 		BAD_ID=0
    [junit] 		CONNECTION=0
    [junit] 		IO_ERROR=0
    [junit] 		WRONG_LENGTH=0
    [junit] 		WRONG_MAP=0
    [junit] 		WRONG_REDUCE=0
    [junit] 	Job Counters 
    [junit] 		Total time spent by all maps waiting after reserving slots (ms)=0
    [junit] 		Total time spent by all reduces waiting after reserving slots (ms)=0
    [junit] 		Rack-local map tasks=1
    [junit] 		SLOTS_MILLIS_MAPS=3792
    [junit] 		SLOTS_MILLIS_REDUCES=3700
    [junit] 		Launched map tasks=1
    [junit] 		Launched reduce tasks=1
    [junit] 	Map-Reduce Framework
    [junit] 		Combine input records=0
    [junit] 		Combine output records=0
    [junit] 		CPU_MILLISECONDS=1500
    [junit] 		Failed Shuffles=0
    [junit] 		GC time elapsed (ms)=86
    [junit] 		Map input records=3
    [junit] 		Map output bytes=71
    [junit] 		Map output records=3
    [junit] 		Merged Map outputs=1
    [junit] 		PHYSICAL_MEMORY_BYTES=112680960
    [junit] 		Reduce input groups=3
    [junit] 		Reduce input records=3
    [junit] 		Reduce output records=3
    [junit] 		Reduce shuffle bytes=83
    [junit] 		Shuffled Maps =1
    [junit] 		Spilled Records=6
    [junit] 		SPLIT_RAW_BYTES=172
    [junit] 		VIRTUAL_MEMORY_BYTES=846856192
    [junit] 2011-01-28 11:00:29,202 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111)) - Shutting down all AsyncDiskService threads...
    [junit] 2011-01-28 11:00:29,202 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140)) - All AsyncDiskService threads are terminated.
    [junit] 2011-01-28 11:00:29,203 INFO  mapred.TaskTracker (TaskTracker.java:run(868)) - Shutting down: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:59816
    [junit] 2011-01-28 11:00:29,205 ERROR filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:run(946)) - Exception in DistributedCache CleanupThread.
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.mapreduce.filecache.TrackerDistributedCacheManager$CleanupThread.run(TrackerDistributedCacheManager.java:943)
    [junit] 2011-01-28 11:00:31,930 INFO  mapred.JvmManager (JvmManager.java:runChild(484)) - JVM : jvm_20110128105949896_0002_m_676208490 exited with exit code 0. Number of tasks it ran: 1
    [junit] 2011-01-28 11:00:31,930 INFO  ipc.Server (Server.java:stop(1600)) - Stopping server on 59816
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 59816: exiting
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 59816: exiting
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 59816: exiting
    [junit] 2011-01-28 11:00:31,931 INFO  mapred.TaskTracker (TaskTracker.java:shutdown(1263)) - Shutting down StatusHttpServer
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 59816
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-01-28 11:00:31,931 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 59816: exiting
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  TEST-org.apache.hadoop.mapreduce.TestChild.xml.<init>

Error Message:


Stack Trace:
Test report file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/TEST-org.apache.hadoop.mapreduce.TestChild.xml was length 0
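
A zero-length TEST-*.xml like the one reported here is usually a side effect of the "Build timed out. Aborting" line in the console above: the JVM running Ant's junit task is killed while the report file is still open, so the recorded &lt;init&gt; "failure" is an artifact of the abort rather than a real assertion failure in TestChild. A small sketch (hypothetical helper, not part of the build) that lists such truncated reports so they can be distinguished from genuine failures:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical diagnostic (assumption: reports live flat in one directory):
// list JUnit report files that were left at length 0 by an aborted build.
public class EmptyReportScanner {
    public static List<String> findEmptyReports(File dir) {
        List<String> empty = new ArrayList<String>();
        File[] files = dir.listFiles();
        if (files == null) {
            return empty;              // directory missing or unreadable
        }
        for (File f : files) {
            String name = f.getName();
            if (name.startsWith("TEST-") && name.endsWith(".xml")
                    && f.length() == 0) {
                empty.add(name);       // truncated report, likely from a kill
            }
        }
        return empty;
    }
}
```

Running this against build/test on the slave would show whether other reports were truncated by the same timeout.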



Hadoop-Mapreduce-22-branch - Build # 17 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/17/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 50513 lines...]
    [junit] 		BAD_ID=0
    [junit] 		CONNECTION=0
    [junit] 		IO_ERROR=0
    [junit] 		WRONG_LENGTH=0
    [junit] 		WRONG_MAP=0
    [junit] 		WRONG_REDUCE=0
    [junit] 	Job Counters 
    [junit] 		Total time spent by all maps waiting after reserving slots (ms)=0
    [junit] 		Total time spent by all reduces waiting after reserving slots (ms)=0
    [junit] 		Rack-local map tasks=1
    [junit] 		SLOTS_MILLIS_MAPS=3805
    [junit] 		SLOTS_MILLIS_REDUCES=3662
    [junit] 		Launched map tasks=1
    [junit] 		Launched reduce tasks=1
    [junit] 	Map-Reduce Framework
    [junit] 		Combine input records=0
    [junit] 		Combine output records=0
    [junit] 		CPU_MILLISECONDS=1530
    [junit] 		Failed Shuffles=0
    [junit] 		GC time elapsed (ms)=88
    [junit] 		Map input records=3
    [junit] 		Map output bytes=71
    [junit] 		Map output records=3
    [junit] 		Merged Map outputs=1
    [junit] 		PHYSICAL_MEMORY_BYTES=110231552
    [junit] 		Reduce input groups=3
    [junit] 		Reduce input records=3
    [junit] 		Reduce output records=3
    [junit] 		Reduce shuffle bytes=83
    [junit] 		Shuffled Maps =1
    [junit] 		Spilled Records=6
    [junit] 		SPLIT_RAW_BYTES=172
    [junit] 		VIRTUAL_MEMORY_BYTES=845045760
    [junit] 2011-01-26 23:00:44,412 INFO  util.AsyncDiskService (AsyncDiskService.java:shutdown(111)) - Shutting down all AsyncDiskService threads...
    [junit] 2011-01-26 23:00:44,413 INFO  util.AsyncDiskService (AsyncDiskService.java:awaitTermination(140)) - All AsyncDiskService threads are terminated.
    [junit] 2011-01-26 23:00:44,414 INFO  mapred.TaskTracker (TaskTracker.java:run(868)) - Shutting down: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:38910
    [junit] 2011-01-26 23:00:44,415 ERROR filecache.TrackerDistributedCacheManager (TrackerDistributedCacheManager.java:run(946)) - Exception in DistributedCache CleanupThread.
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.mapreduce.filecache.TrackerDistributedCacheManager$CleanupThread.run(TrackerDistributedCacheManager.java:943)
    [junit] 2011-01-26 23:00:47,017 INFO  mapred.JvmManager (JvmManager.java:runChild(484)) - JVM : jvm_20110126230006126_0002_m_1909000990 exited with exit code 0. Number of tasks it ran: 1
    [junit] 2011-01-26 23:00:47,017 INFO  ipc.Server (Server.java:stop(1600)) - Stopping server on 38910
    [junit] 2011-01-26 23:00:47,018 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 1 on 38910: exiting
    [junit] 2011-01-26 23:00:47,018 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 3 on 38910: exiting
    [junit] 2011-01-26 23:00:47,018 INFO  ipc.Server (Server.java:run(675)) - Stopping IPC Server Responder
    [junit] 2011-01-26 23:00:47,019 INFO  mapred.TaskTracker (TaskTracker.java:shutdown(1263)) - Shutting down StatusHttpServer
    [junit] 2011-01-26 23:00:47,018 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 0 on 38910: exiting
    [junit] 2011-01-26 23:00:47,019 INFO  ipc.Server (Server.java:run(475)) - Stopping IPC Server listener on 38910
    [junit] 2011-01-26 23:00:47,018 INFO  ipc.Server (Server.java:run(1443)) - IPC Server handler 2 on 38910: exiting
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  TEST-org.apache.hadoop.mapreduce.TestChild.xml.<init>

Error Message:


Stack Trace:
Test report file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/TEST-org.apache.hadoop.mapreduce.TestChild.xml was length 0



Hadoop-Mapreduce-22-branch - Build # 16 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/16/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 49791 lines...]
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,175 INFO  mapred.MapTask (MapTask.java:<init>(819)) - mapreduce.task.io.sort.mb: 10
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,176 INFO  mapred.MapTask (MapTask.java:<init>(820)) - soft limit at 8388608
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,176 INFO  mapred.MapTask (MapTask.java:<init>(821)) - bufstart = 0; bufvoid = 10485760
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,176 INFO  mapred.MapTask (MapTask.java:<init>(822)) - kvstart = 2621436; length = 655360
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,363 INFO  mapred.MapTask (MapTask.java:flush(1283)) - Starting flush of map output
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,363 INFO  mapred.MapTask (MapTask.java:flush(1302)) - Spilling map output
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,364 INFO  mapred.MapTask (MapTask.java:flush(1303)) - bufstart = 0; bufend = 4216468; bufvoid = 10485760
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,364 INFO  mapred.MapTask (MapTask.java:flush(1305)) - kvstart = 2621436(10485744); kvend = 2605048(10420192); length = 16389/655360
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,541 INFO  mapred.MapTask (MapTask.java:sortAndSpill(1489)) - Finished spill 0
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,545 INFO  mapred.Task (Task.java:done(848)) - Task:attempt_20110126105948982_0003_m_000002_0 is done. And is in the process of commiting
    [junit] attempt_20110126105948982_0003_m_000002_0: 2011-01-26 11:01:01,611 INFO  mapred.Task (Task.java:sendDone(968)) - Task 'attempt_20110126105948982_0003_m_000002_0' done.
    [junit] 2011-01-26 11:01:02,167 INFO  mapreduce.Job (Job.java:printTaskEvents(1200)) - Task Id : attempt_20110126105948982_0003_m_000004_0, Status : SUCCEEDED
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,541 WARN  util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,771 INFO  jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=MAP, sessionId=
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,772 WARN  conf.Configuration (Configuration.java:handleDeprecation(313)) - user.name is deprecated. Instead, use mapreduce.job.user.name
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,842 WARN  conf.Configuration (Configuration.java:handleDeprecation(313)) - mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,931 INFO  util.ProcessTree (ProcessTree.java:isSetsidSupported(65)) - setsid exited with exit code 0
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:00,992 INFO  mapred.Task (Task.java:initialize(523)) -  Using ResourceCalculatorPlugin : org.apache.hadoop.mapreduce.util.LinuxResourceCalculatorPlugin@3cc262
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,046 WARN  util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:constructProcessInfo(519)) - The process 7080 may have finished in the interim.
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,047 WARN  util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:constructProcessInfo(519)) - The process 7081 may have finished in the interim.
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,047 WARN  util.ProcfsBasedProcessTree (ProcfsBasedProcessTree.java:constructProcessInfo(519)) - The process 7082 may have finished in the interim.
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,147 INFO  mapred.MapTask (MapTask.java:runOldMapper(387)) - numReduceTasks: 1
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,167 INFO  mapred.MapTask (MapTask.java:setEquator(1021)) - (EQUATOR) 0 kvi 2621436(10485744)
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,167 INFO  mapred.MapTask (MapTask.java:<init>(819)) - mapreduce.task.io.sort.mb: 10
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,167 INFO  mapred.MapTask (MapTask.java:<init>(820)) - soft limit at 8388608
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,168 INFO  mapred.MapTask (MapTask.java:<init>(821)) - bufstart = 0; bufvoid = 10485760
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,168 INFO  mapred.MapTask (MapTask.java:<init>(822)) - kvstart = 2621436; length = 655360
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,357 INFO  mapred.MapTask (MapTask.java:flush(1283)) - Starting flush of map output
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,358 INFO  mapred.MapTask (MapTask.java:flush(1302)) - Spilling map output
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,358 INFO  mapred.MapTask (MapTask.java:flush(1303)) - bufstart = 0; bufend = 4214128; bufvoid = 10485760
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,358 INFO  mapred.MapTask (MapTask.java:flush(1305)) - kvstart = 2621436(10485744); kvend = 2605048(10420192); length = 16389/655360
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,539 INFO  mapred.MapTask (MapTask.java:sortAndSpill(1489)) - Finished spill 0
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,543 INFO  mapred.Task (Task.java:done(848)) - Task:attempt_20110126105948982_0003_m_000004_0 is done. And is in the process of commiting
    [junit] attempt_20110126105948982_0003_m_000004_0: 2011-01-26 11:01:01,595 INFO  mapred.Task (Task.java:sendDone(968)) - Task 'attempt_20110126105948982_0003_m_000004_0' done.
    [junit] 2011-01-26 11:01:02,959 INFO  mapred.TaskTracker (TaskTracker.java:reportProgress(2663)) - attempt_20110126105948982_0003_r_000000_0 0.095238104% reduce > copy(2 of 7 at 2.02 MB/s)
    [junit] 2011-01-26 11:01:03,141 INFO  mapred.TaskTracker (TaskTracker.java:sendMapFile(3833)) - Sent out 4231696 bytes to reduce 0 from map: attempt_20110126105948982_0003_m_000003_0 given 4231696/4231692
    [junit] 2011-01-26 11:01:03,142 INFO  mapred.TaskTracker (TaskTracker.java:doGet(3699)) - Shuffled 1maps (mapIds=attempt_20110126105948982_0003_m_000003_0) to reduce 0 in 66s
    [junit] 2011-01-26 11:01:03,142 INFO  TaskTracker.clienttrace (TaskTracker.java:doGet(3704)) - src: 127.0.0.1:53935, dest: 127.0.0.1:43738, maps: 1, op: MAPRED_SHUFFLE, reduceID: 0, duration: 66
    [junit] 2011-01-26 11:01:03,255 INFO  mapred.TaskTracker (TaskTracker.java:sendMapFile(3833)) - Sent out 4232866 bytes to reduce 0 from map: attempt_20110126105948982_0003_m_000002_0 given 4232866/4232862
    [junit] 2011-01-26 11:01:03,316 INFO  mapred.TaskTracker (TaskTracker.java:sendMapFile(3833)) - Sent out 4230526 bytes to reduce 0 from map: attempt_20110126105948982_0003_m_000004_0 given 4230526/4230522
    [junit] 2011-01-26 11:01:03,317 INFO  mapred.TaskTracker (TaskTracker.java:doGet(3699)) - Shuffled 2maps (mapIds=attempt_20110126105948982_0003_m_000002_0,attempt_20110126105948982_0003_m_000004_0) to reduce 0 in 169s
    [junit] 2011-01-26 11:01:03,317 INFO  TaskTracker.clienttrace (TaskTracker.java:doGet(3704)) - src: 127.0.0.1:34350, dest: 127.0.0.1:41439, maps: 2, op: MAPRED_SHUFFLE, reduceID: 0, duration: 169
    [junit] 2011-01-26 11:01:03,398 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1099)) -  map 71% reduce 0%
    [junit] 2011-01-26 11:01:03,671 INFO  mapred.JvmManager (JvmManager.java:reapJvm(377)) - Killing JVM: jvm_20110126105948982_0003_m_393059862
    [junit] 2011-01-26 11:01:03,671 INFO  mapred.JvmManager (JvmManager.java:<init>(459)) - In JvmRunner constructed JVM ID: jvm_20110126105948982_0003_m_8075769
    [junit] 2011-01-26 11:01:03,671 INFO  mapred.JvmManager (JvmManager.java:spawnNewJvm(423)) - JVM Runner jvm_20110126105948982_0003_m_8075769 spawned.
    [junit] 2011-01-26 11:01:03,671 INFO  mapred.JvmManager (JvmManager.java:runChild(484)) - JVM : jvm_20110126105948982_0003_m_1058292542 exited with exit code 0. Number of tasks it ran: 1
    [junit] 2011-01-26 11:01:03,672 INFO  mapred.JvmManager (JvmManager.java:runChild(484)) - JVM : jvm_20110126105948982_0003_m_393059862 exited with exit code 0. Number of tasks it ran: 1
    [junit] 2011-01-26 11:01:04,334 INFO  mapred.TaskTracker (TaskTracker.java:getTask(3247)) - JVM with ID: jvm_20110126105948982_0003_m_8075769 given task: attempt_20110126105948982_0003_m_000005_0
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
1 tests failed.
FAILED:  TEST-org.apache.hadoop.mapred.TestReduceFetch.xml.<init>

Error Message:


Stack Trace:
Test report file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/TEST-org.apache.hadoop.mapred.TestReduceFetch.xml was length 0



Hadoop-Mapreduce-22-branch - Build # 15 - Still Failing

See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/15/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 42728 lines...]
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,500 INFO  reduce.Fetcher (Fetcher.java:copyFromHost(217)) - for url=42283/mapOutput?job=job_20110125105934639_0003&reduce=0&map=attempt_20110125105934639_0003_m_000008_0 sent hash and receievd reply
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,501 INFO  reduce.Fetcher (Fetcher.java:copyMapOutput(314)) - fetcher#1 about to shuffle output of map attempt_20110125105934639_0003_m_000008_0 decomp: 20 len: 24 to MEMORY
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,501 INFO  reduce.Fetcher (Fetcher.java:shuffleToMemory(479)) - Read 20 bytes from map-output for attempt_20110125105934639_0003_m_000008_0
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,501 INFO  reduce.MergeManager (MergeManager.java:closeInMemoryFile(277)) - closeInMemoryFile -> map-output of size: 20, inMemoryMapOutputs.size() -> 9
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,501 INFO  reduce.ShuffleScheduler (ShuffleScheduler.java:freeHost(345)) - localhost:42283 freed by fetcher#1 in 79s
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,508 INFO  reduce.Fetcher (Fetcher.java:copyFromHost(217)) - for url=44028/mapOutput?job=job_20110125105934639_0003&reduce=0&map=attempt_20110125105934639_0003_m_000010_0 sent hash and receievd reply
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,508 INFO  reduce.Fetcher (Fetcher.java:copyMapOutput(314)) - fetcher#5 about to shuffle output of map attempt_20110125105934639_0003_m_000010_0 decomp: 20 len: 24 to MEMORY
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,508 INFO  reduce.Fetcher (Fetcher.java:shuffleToMemory(479)) - Read 20 bytes from map-output for attempt_20110125105934639_0003_m_000010_0
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,509 INFO  reduce.MergeManager (MergeManager.java:closeInMemoryFile(277)) - closeInMemoryFile -> map-output of size: 20, inMemoryMapOutputs.size() -> 10
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:55,509 INFO  reduce.ShuffleScheduler (ShuffleScheduler.java:freeHost(345)) - localhost:44028 freed by fetcher#5 in 87s
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,425 INFO  reduce.EventFetcher (EventFetcher.java:run(69)) - attempt_20110125105934639_0003_r_000000_0: Got 1 new map-outputs
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,426 INFO  reduce.ShuffleScheduler (ShuffleScheduler.java:getHost(303)) - Assiging localhost:42283 with 1 to fetcher#5
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,426 INFO  reduce.ShuffleScheduler (ShuffleScheduler.java:getMapsForHost(333)) - assigned 1 of 1 to localhost:42283 to fetcher#5
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,475 INFO  reduce.Fetcher (Fetcher.java:copyFromHost(217)) - for url=42283/mapOutput?job=job_20110125105934639_0003&reduce=0&map=attempt_20110125105934639_0003_m_000009_0 sent hash and receievd reply
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,476 INFO  reduce.Fetcher (Fetcher.java:copyMapOutput(314)) - fetcher#5 about to shuffle output of map attempt_20110125105934639_0003_m_000009_0 decomp: 20 len: 24 to MEMORY
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,476 INFO  reduce.Fetcher (Fetcher.java:shuffleToMemory(479)) - Read 20 bytes from map-output for attempt_20110125105934639_0003_m_000009_0
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,476 INFO  reduce.MergeManager (MergeManager.java:closeInMemoryFile(277)) - closeInMemoryFile -> map-output of size: 20, inMemoryMapOutputs.size() -> 11
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,477 INFO  reduce.ShuffleScheduler (ShuffleScheduler.java:freeHost(345)) - localhost:42283 freed by fetcher#5 in 51s
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,481 INFO  reduce.MergeManager (MergeManager.java:finalMerge(629)) - finalMerge called with 11 in-memory map-outputs and 0 on-disk map-outputs
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,505 INFO  mapred.Merger (Merger.java:merge(549)) - Merging 11 sorted segments
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,505 INFO  mapred.Merger (Merger.java:merge(648)) - Down to the last merge-pass, with 11 segments left of total size: 154 bytes
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,516 INFO  reduce.MergeManager (MergeManager.java:finalMerge(701)) - Merged 11 segments, 220 bytes to disk to satisfy reduce memory limit
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,517 INFO  reduce.MergeManager (MergeManager.java:finalMerge(727)) - Merging 1 files, 204 bytes from disk
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,517 INFO  reduce.MergeManager (MergeManager.java:finalMerge(742)) - Merging 0 segments, 0 bytes from memory into reduce
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,518 INFO  mapred.Merger (Merger.java:merge(549)) - Merging 1 sorted segments
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,521 INFO  mapred.Merger (Merger.java:merge(648)) - Down to the last merge-pass, with 1 segments left of total size: 194 bytes
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:00:58,693 INFO  mapred.Task (Task.java:done(848)) - Task:attempt_20110125105934639_0003_r_000000_0 is done. And is in the process of commiting
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:01:00,729 INFO  mapred.Task (Task.java:commit(1009)) - Task attempt_20110125105934639_0003_r_000000_0 is allowed to commit now
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:01:00,742 INFO  mapred.FileOutputCommitter (FileOutputCommitter.java:commitTask(139)) - Saved output of task 'attempt_20110125105934639_0003_r_000000_0' to hdfs://localhost:57190/tmp/sortvalidate/recordstatschecker
    [junit] attempt_20110125105934639_0003_r_000000_0: 2011-01-25 11:01:00,780 INFO  mapred.Task (Task.java:sendDone(968)) - Task 'attempt_20110125105934639_0003_r_000000_0' done.
    [junit] 2011-01-25 11:01:04,124 INFO  mapred.TaskTracker (TaskTracker.java:reportProgress(2663)) - attempt_20110125105934639_0003_m_000011_0 0.0% 
    [junit] 2011-01-25 11:01:04,130 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/sortvalidate/recordstatschecker/_temporary	dst=null	perm=null
    [junit] 2011-01-25 11:01:04,137 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=hudson	ip=/127.0.0.1	cmd=create	src=/tmp/sortvalidate/recordstatschecker/_SUCCESS	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 2011-01-25 11:01:04,145 INFO  hdfs.StateChange (FSNamesystem.java:completeFileInternal(1713)) - DIR* NameSystem.completeFile: file /tmp/sortvalidate/recordstatschecker/_SUCCESS is closed by DFSClient_attempt_20110125105934639_0003_m_000011_0
    [junit] 2011-01-25 11:01:04,148 INFO  hdfs.StateChange (BlockManager.java:addToInvalidates(559)) - BLOCK* NameSystem.addToInvalidates: blk_2451946550361572757_1025 to 127.0.0.1:55963 127.0.0.1:38914 127.0.0.1:51592 
    [junit] 2011-01-25 11:01:04,148 INFO  hdfs.StateChange (BlockManager.java:addToInvalidates(559)) - BLOCK* NameSystem.addToInvalidates: blk_-6508113479149908994_1026 to 127.0.0.1:38914 127.0.0.1:51592 127.0.0.1:55963 
    [junit] 2011-01-25 11:01:04,148 INFO  hdfs.StateChange (BlockManager.java:addToInvalidates(559)) - BLOCK* NameSystem.addToInvalidates: blk_-300519826661286597_1027 to 127.0.0.1:55963 127.0.0.1:51592 127.0.0.1:38914 
    [junit] 2011-01-25 11:01:04,149 INFO  FSNamesystem.audit (FSNamesystem.java:logAuditEvent(148)) - ugi=hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/hadoop-hudson/mapred/staging/hudson/.staging/job_20110125105934639_0003	dst=null	perm=null
    [junit] 2011-01-25 11:01:04,217 INFO  mapred.TaskTracker (TaskTracker.java:reportProgress(2663)) - attempt_20110125105934639_0003_m_000011_0 0.0% cleanup > map
    [junit] 2011-01-25 11:01:04,218 INFO  mapred.TaskTracker (TaskTracker.java:reportDone(2744)) - Task attempt_20110125105934639_0003_m_000011_0 is done.
    [junit] 2011-01-25 11:01:04,218 INFO  mapred.TaskTracker (TaskTracker.java:reportDone(2745)) - reported output size for attempt_20110125105934639_0003_m_000011_0  was -1
    [junit] 2011-01-25 11:01:04,219 INFO  mapred.TaskTracker (TaskTracker.java:addFreeSlots(2230)) - addFreeSlot : current free slots : 2
    [junit] 2011-01-25 11:01:04,449 WARN  util.ProcessTree (ProcessTree.java:sendSignal(134)) - Error executing shell command org.apache.hadoop.util.Shell$ExitCodeException: kill: No such process
    [junit] 
    [junit] 2011-01-25 11:01:04,449 INFO  util.ProcessTree (ProcessTree.java:sendSignal(137)) - Sending signal to all members of process group -19718: SIGTERM. Exit code 1
    [junit] 2011-01-25 11:01:04,868 INFO  mapreduce.Job (Job.java:monitorAndPrintJob(1099)) -  map 100% reduce 100%
Build timed out. Aborting
    [junit] Running org.apache.hadoop.mapred.TestMiniMRDFSSort
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.hadoop.mapred.TestMiniMRDFSSort FAILED (crashed)
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapred.TestMiniMRDFSSort.testMapReduceSort

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please note the time in the report does not reflect the time until the VM exit.


FAILED:  TEST-org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.xml.<init>

Error Message:


Stack Trace:
Test report file /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/TEST-org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers.xml was length 0



Hadoop-Mapreduce-22-branch - Build # 14 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/14/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 203453 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-23 01:22:12,348 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,348 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,349 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,350 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,350 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,350 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,351 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,351 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,351 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,352 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,352 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,353 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,353 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,353 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,354 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,354 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,354 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-23 01:22:12,355 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.927 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.324 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.3 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:815: Tests failed!

Total time: 164 minutes 6 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
REGRESSION:  org.apache.hadoop.mapred.TestSetupAndCleanupFailure.testWithDFS

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  <init>.org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken

Error Message:
null

Stack Trace:
java.lang.ExceptionInInitializerError
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:219)
	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:276)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.<clinit>(TestUmbilicalProtocolWithJobToken.java:63)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:169)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos configuration
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:89)
Caused by: KrbException: Could not load configuration file /etc/krb5.conf (No such file or directory)
	at sun.security.krb5.Config.<init>(Config.java:147)
	at sun.security.krb5.Config.getInstance(Config.java:79)
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:85)
Caused by: java.io.FileNotFoundException: /etc/krb5.conf (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.security.krb5.Config$1.run(Config.java:539)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.Config.loadConfigFile(Config.java:535)
	at sun.security.krb5.Config.<init>(Config.java:144)
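A note on the TestUmbilicalProtocolWithJobToken failure above (it recurs in several of these builds): the trace points at an environment problem rather than a code regression. The JDK's Kerberos static initializer (sun.security.krb5.Config, reached via KerberosName.<clinit>) fails because /etc/krb5.conf does not exist on the build slave. A minimal workaround sketch, assuming a hypothetical stub path (/tmp/krb5-stub.conf) and a placeholder realm; the standard JDK system property java.security.krb5.conf is what redirects the lookup:

```shell
# Create a stub Kerberos config so the JDK's static initializer has a file
# to parse instead of failing with FileNotFoundException on /etc/krb5.conf.
# EXAMPLE.COM is a placeholder realm, not a real deployment value.
cat > /tmp/krb5-stub.conf <<'EOF'
[libdefaults]
    default_realm = EXAMPLE.COM
EOF

# Point the forked test JVMs at the stub via java.security.krb5.conf,
# e.g. through ANT_OPTS (hypothetical invocation, not run here):
#   ANT_OPTS="-Djava.security.krb5.conf=/tmp/krb5-stub.conf" ant run-test-mapred
echo "stub realm lines: $(grep -c default_realm /tmp/krb5-stub.conf)"
```

Installing a real /etc/krb5.conf on the slave would work equally well; the system property merely avoids needing root on the Hudson node.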




Hadoop-Mapreduce-22-branch - Build # 13 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/13/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 203553 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-20 13:11:44,322 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,323 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,323 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,324 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,325 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,325 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,326 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,327 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,327 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,327 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,328 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,328 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,329 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,329 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,329 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-20 13:11:44,330 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.953 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.362 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.325 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:815: Tests failed!

Total time: 154 minutes 58 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
2 tests failed.
FAILED:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  <init>.org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken

Error Message:
null

Stack Trace:
java.lang.ExceptionInInitializerError
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:219)
	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:276)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.<clinit>(TestUmbilicalProtocolWithJobToken.java:63)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:169)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos configuration
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:89)
Caused by: KrbException: Could not load configuration file /etc/krb5.conf (No such file or directory)
	at sun.security.krb5.Config.<init>(Config.java:147)
	at sun.security.krb5.Config.getInstance(Config.java:79)
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:85)
Caused by: java.io.FileNotFoundException: /etc/krb5.conf (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.security.krb5.Config$1.run(Config.java:539)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.Config.loadConfigFile(Config.java:535)
	at sun.security.krb5.Config.<init>(Config.java:144)




Hadoop-Mapreduce-22-branch - Build # 12 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/12/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 203922 lines...]
    [junit]    0.85:96549
    [junit]    0.9:96658
    [junit]    0.95:96670
    [junit] Failed Reduce CDF --------
    [junit] 0: -9223372036854775808--9223372036854775807
    [junit] map attempts to success -- 0.6567164179104478, 0.3283582089552239, 0.014925373134328358, 
    [junit] ===============
    [junit] 2011-01-12 01:33:33,876 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000025 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,877 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000028 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,877 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000029 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,878 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000030 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,878 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000031 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,878 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000032 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,879 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000033 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,879 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000034 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,879 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000035 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,880 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000036 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,880 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000037 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,880 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000038 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,881 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000039 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,881 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000040 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,881 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000041 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,882 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000042 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,882 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000043 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,882 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000044 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,883 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000045 has nulll TaskStatus
    [junit] 2011-01-12 01:33:33,883 WARN  rumen.ZombieJob (ZombieJob.java:sanitizeLoggedTask(318)) - Task task_200904211745_0004_r_000046 has nulll TaskStatus
    [junit] generated failed map runtime distribution
    [junit] 100000: 18592--18592
    [junit]    0.1:18592
    [junit]    0.5:18592
    [junit]    0.9:18592
    [junit] Tests run: 5, Failures: 0, Errors: 0, Time elapsed: 2.972 sec
    [junit] Running org.apache.hadoop.util.TestReflectionUtils
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.328 sec
    [junit] Running org.apache.hadoop.util.TestRunJar
    [junit] Creating file/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/data/out
    [junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 0.339 sec

checkfailure:
    [touch] Creating /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build/test/testsfailed

run-test-mapred-all-withtestcaseonly:

run-test-mapred:

BUILD FAILED
/grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/build.xml:815: Tests failed!

Total time: 175 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
REGRESSION:  org.apache.hadoop.mapreduce.TestLocalRunner.testMultiMaps

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  org.apache.hadoop.mapred.TestControlledMapReduceJob.testControlledMapReduceJob

Error Message:
Timeout occurred. Please note the time in the report does not reflect the time until the timeout.

Stack Trace:
junit.framework.AssertionFailedError: Timeout occurred. Please note the time in the report does not reflect the time until the timeout.


FAILED:  <init>.org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken

Error Message:
null

Stack Trace:
java.lang.ExceptionInInitializerError
	at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:219)
	at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:276)
	at org.apache.hadoop.mapreduce.security.TestUmbilicalProtocolWithJobToken.<clinit>(TestUmbilicalProtocolWithJobToken.java:63)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:169)
Caused by: java.lang.IllegalArgumentException: Can't get Kerberos configuration
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:89)
Caused by: KrbException: Could not load configuration file /etc/krb5.conf (No such file or directory)
	at sun.security.krb5.Config.<init>(Config.java:147)
	at sun.security.krb5.Config.getInstance(Config.java:79)
	at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:85)
Caused by: java.io.FileNotFoundException: /etc/krb5.conf (No such file or directory)
	at java.io.FileInputStream.open(Native Method)
	at java.io.FileInputStream.<init>(FileInputStream.java:106)
	at java.io.FileInputStream.<init>(FileInputStream.java:66)
	at sun.security.krb5.Config$1.run(Config.java:539)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.krb5.Config.loadConfigFile(Config.java:535)
	at sun.security.krb5.Config.<init>(Config.java:144)

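The bottom of the cause chain above is a missing `/etc/krb5.conf` on the build slave, which makes `KerberosName`'s static initializer throw before any test code runs. A minimal sketch of diagnosing and working around this class of failure, assuming the standard JDK system property `java.security.krb5.conf`; the realm value and file path below are placeholders, not values taken from this build:

```shell
# Check for the file the JDK's Kerberos code reads by default on Linux.
if [ ! -f /etc/krb5.conf ]; then
    echo "/etc/krb5.conf missing: Kerberos-dependent tests will fail in <clinit>"
fi

# Write a minimal substitute config (placeholder realm, illustration only)
# and point the test JVM at it via the standard JDK property:
cat > /tmp/krb5.conf <<'EOF'
[libdefaults]
    default_realm = EXAMPLE.COM
EOF
# java -Djava.security.krb5.conf=/tmp/krb5.conf ...  (run the test JVM with this flag)
```

Whether the ant/JUnit forked test JVM actually picks that flag up depends on how the build forwards JVM arguments, which is build-specific.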



Hadoop-Mapreduce-22-branch - Build # 11 - Still Failing

Posted by Apache Hudson Server <hu...@hudson.apache.org>.
See https://hudson.apache.org/hudson/job/Hadoop-Mapreduce-22-branch/11/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 2730 lines...]
     [exec] checking how to run the C++ preprocessor... g++ -E
     [exec] checking for objdir... .libs
     [exec] checking if gcc supports -fno-rtti -fno-exceptions... no
     [exec] checking for gcc option to produce PIC... -fPIC -DPIC
     [exec] checking if gcc PIC flag -fPIC -DPIC works... yes
     [exec] checking if gcc static flag -static works... yes
     [exec] checking if gcc supports -c -o file.o... yes
     [exec] checking if gcc supports -c -o file.o... (cached) yes
     [exec] checking whether the gcc linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
     [exec] checking whether -lc should be explicitly linked in... no
     [exec] checking dynamic linker characteristics... GNU/Linux ld.so
     [exec] checking how to hardcode library paths into programs... immediate
     [exec] checking whether stripping libraries is possible... yes
     [exec] checking if libtool supports shared libraries... yes
     [exec] checking whether to build shared libraries... yes
     [exec] checking whether to build static libraries... yes
     [exec] checking for ld used by g++... /usr/bin/ld -m elf_x86_64
     [exec] checking if the linker (/usr/bin/ld -m elf_x86_64) is GNU ld... yes
     [exec] checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
     [exec] checking for g++ option to produce PIC... -fPIC -DPIC
     [exec] checking if g++ PIC flag -fPIC -DPIC works... yes
     [exec] checking if g++ static flag -static works... yes
     [exec] checking if g++ supports -c -o file.o... yes
     [exec] checking if g++ supports -c -o file.o... (cached) yes
     [exec] checking whether the g++ linker (/usr/bin/ld -m elf_x86_64) supports shared libraries... yes
     [exec] checking dynamic linker characteristics... GNU/Linux ld.so
     [exec] checking how to hardcode library paths into programs... immediate
     [exec] checking for unistd.h... (cached) yes
     [exec] checking for stdbool.h that conforms to C99... yes
     [exec] checking for _Bool... no
     [exec] checking for an ANSI C-conforming const... yes
     [exec] checking for off_t... yes
     [exec] checking for size_t... yes
     [exec] checking whether strerror_r is declared... yes
     [exec] checking for strerror_r... yes
     [exec] checking whether strerror_r returns char *... yes
     [exec] checking for mkdir... yes
     [exec] checking for uname... yes
     [exec] configure: creating ./config.status
     [exec] config.status: creating Makefile
     [exec] config.status: creating impl/config.h
     [exec] config.status: executing depfiles commands
     [exec] config.status: executing libtool commands

compile-c++-utils:
     [exec] cd /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils && /bin/bash /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils/missing --run aclocal-1.10 -I m4
     [exec]  cd /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils && /bin/bash /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils/missing --run automake-1.10 --foreign 
     [exec] cd /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils && /bin/bash /grid/0/hudson/hudson-slave/workspace/Hadoop-Mapreduce-22-branch/trunk/src/c++/utils/missing --run autoconf
     [exec] /bin/bash ./config.status --recheck
Build timed out. Aborting
[FINDBUGS] Skipping publisher since build result is FAILURE
Publishing Javadoc
Archiving artifacts
Recording test results
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
No tests ran.
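Build #11 died not on a test failure but on Hudson's global build timeout while `config.status --recheck` was re-running the autotools chain for `src/c++/utils`. One way to make such a hang fail fast inside the build itself, rather than eating the whole job's time budget, is to wrap the suspect step in GNU coreutils `timeout`; a sketch (the `sleep` below stands in for the potentially hanging `config.status --recheck` step, and the 1-second limit is illustrative):

```shell
# Guard a step that can hang with a per-step timeout so the job fails fast
# instead of tripping the global build timeout. 'sleep 5' simulates the hang.
timeout 1 sleep 5 && echo "step finished" || echo "step timed out"
```

With a real limit (minutes, not seconds) around the regeneration step, the console would show which step stalled instead of the bare "Build timed out. Aborting" above.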