Posted to commits@kafka.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2012/08/25 08:08:29 UTC
Build failed in Jenkins: Kafka-trunk #141
See <https://builds.apache.org/job/Kafka-trunk/141/changes>
Changes:
[junrao] Require values in Utils.getTopic* methods to be positive; patched by Swapnil Ghike; reviewed by Jun Rao; KAFKA-481
------------------------------------------
[...truncated 2288 lines...]
[2012-08-25 06:07:52,155] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-08-25 06:07:52,155] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:52,156] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:52,156] INFO Closing zookeeper client... (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:52,156] INFO zkActor stopped (kafka.log.LogManager:61)
[2012-08-25 06:07:52,158] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testPartitionedSendToNewBrokerInExistingTopic
[info] Test Starting: testDefaultPartitioner
[2012-08-25 06:07:53,203] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,204] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-08-25 06:07:53,204] INFO connecting to ZK: 127.0.0.1:2182 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,221] INFO Awaiting connections on port 50701 (kafka.network.Acceptor:130)
[2012-08-25 06:07:53,222] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-25 06:07:53,222] INFO Registering broker /brokers/ids/0 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,227] INFO Registering broker /brokers/ids/0 succeeded with id:0,creatorId:67.195.138.60-1345874873222,host:67.195.138.60,port:50701 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,227] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-25 06:07:53,227] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,228] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,228] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-08-25 06:07:53,229] INFO connecting to ZK: 127.0.0.1:2182 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,232] INFO Awaiting connections on port 43800 (kafka.network.Acceptor:130)
[2012-08-25 06:07:53,232] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-25 06:07:53,233] INFO Registering broker /brokers/ids/1 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,234] INFO Registering broker /brokers/ids/1 succeeded with id:1,creatorId:67.195.138.60-1345874873233,host:67.195.138.60,port:43800 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,234] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-25 06:07:53,235] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,235] INFO Connected to localhost:50701 for producing (kafka.producer.SyncProducer:61)
[2012-08-25 06:07:53,236] INFO Connected to localhost:43800 for producing (kafka.producer.SyncProducer:61)
[2012-08-25 06:07:53,236] INFO Created log for 'test-topic'-0 (kafka.log.LogManager:61)
[2012-08-25 06:07:53,236] INFO Begin registering broker topic /brokers/topics/test-topic/0 with 4 partitions (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,236] INFO Created log for 'test-topic'-2 (kafka.log.LogManager:61)
[2012-08-25 06:07:53,237] INFO Begin registering broker topic /brokers/topics/test-topic/1 with 4 partitions (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,246] INFO End registering broker topic /brokers/topics/test-topic/0 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,246] INFO End registering broker topic /brokers/topics/test-topic/1 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,737] INFO Closing all async producers (kafka.producer.ProducerPool:61)
[2012-08-25 06:07:53,738] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,739] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:53,740] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:53,740] INFO Closing zookeeper client... (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,740] INFO zkActor stopped (kafka.log.LogManager:61)
[2012-08-25 06:07:53,750] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,751] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-08-25 06:07:53,751] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:53,752] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-08-25 06:07:53,752] INFO Closing zookeeper client... (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:53,752] INFO zkActor stopped (kafka.log.LogManager:61)
[2012-08-25 06:07:53,753] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testDefaultPartitioner
[info] == core-kafka / kafka.producer.ProducerTest ==
[info]
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info] Test Starting: testKafkaLog4jConfigs
[2012-08-25 06:07:54,767] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-25 06:07:54,767] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-08-25 06:07:54,767] INFO connecting to ZK: 127.0.0.1:2182 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:54,786] INFO Awaiting connections on port 60023 (kafka.network.Acceptor:130)
[2012-08-25 06:07:54,786] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-25 06:07:54,786] INFO Registering broker /brokers/ids/0 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:54,794] INFO Registering broker /brokers/ids/0 succeeded with id:0,creatorId:67.195.138.60-1345874874787,host:67.195.138.60,port:60023 (kafka.server.KafkaZooKeeper:61)
[2012-08-25 06:07:54,794] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-25 06:07:54,795] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-08-25 06:07:54,795] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-25 06:07:54,796] INFO starting log cleaner every 300000 ms (kafka.log.LogManager:61)
[2012-08-25 06:07:54,798] INFO Awaiting connections on port 50143 (kafka.network.Acceptor:130)
[2012-08-25 06:07:54,798] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-25 06:07:54,798] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-25 06:07:54,799] INFO Kafka server started. (kafka.server.KafkaServer:61)
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
log4j:WARN No appenders could be found for logger (kafka.producer.ProducerPool).
log4j:WARN Please initialize the log4j system properly.
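The "No appenders could be found for logger" warnings above mean the test JVM emitted log events before any log4j configuration was loaded. A minimal log4j.properties that would silence them might look like the following; the appender wiring here is the stock log4j ConsoleAppender/PatternLayout setup, illustrative rather than the actual Kafka test configuration:

```properties
# Minimal root-logger setup; values are illustrative, not Kafka's test config.
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
# Pattern chosen to match the broker log lines seen in this build output.
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n
```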
[info] Test Passed: testKafkaLog4jConfigs
[info] Test Starting: testBrokerListLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testBrokerListLog4jAppends
[info] Test Starting: testZkConnectLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testZkConnectLog4jAppends
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info]
[info] == core-kafka / kafka.producer.AsyncProducerTest ==
[info] Test Starting: testProducerQueueSize
Queue is full..
[info] Test Passed: testProducerQueueSize
[info] Test Starting: testAddAfterQueueClosed
[info] Test Passed: testAddAfterQueueClosed
[info] Test Starting: testBatchSize
[info] Test Passed: testBatchSize
[info] Test Starting: testQueueTimeExpired
[info] Test Passed: testQueueTimeExpired
[info] Test Starting: testSenderThreadShutdown
[info] Test Passed: testSenderThreadShutdown
[info] Test Starting: testCollateEvents
[info] Test Passed: testCollateEvents
[info] Test Starting: testCollateAndSerializeEvents
[info] Test Passed: testCollateAndSerializeEvents
[info] == core-kafka / kafka.producer.AsyncProducerTest ==
[info]
[info] == core-kafka / kafka.integration.AutoOffsetResetTest ==
[info] Test Starting: testEarliestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] Test Passed: testEarliestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] Test Starting: testEarliestOffsetResetBackward(kafka.integration.AutoOffsetResetTest)
[info] Test Passed: testEarliestOffsetResetBackward(kafka.integration.AutoOffsetResetTest)
[info] Test Starting: testLatestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[error] Test Failed: testLatestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
junit.framework.AssertionFailedError: expected:<0> but was:<3>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:277)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at kafka.integration.AutoOffsetResetTest.testLatestOffsetResetForward(AutoOffsetResetTest.scala:218)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:164)
at junit.framework.TestCase.runBare(TestCase.java:130)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:120)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:309)
at org.scalatest.tools.ScalaTestFramework$ScalaTestRunner.run(ScalaTestFramework.scala:40)
at sbt.TestRunner.run(TestFramework.scala:53)
at sbt.TestRunner.runTest$1(TestFramework.scala:67)
at sbt.TestRunner.run(TestFramework.scala:76)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11.runTest$2(TestFramework.scala:194)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.NamedTestTask.run(TestFramework.scala:92)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.TaskManager$Task.invoke(TaskManager.scala:62)
at sbt.impl.RunTask.doRun$1(RunTask.scala:77)
at sbt.impl.RunTask.runTask(RunTask.scala:85)
at sbt.impl.RunTask.sbt$impl$RunTask$$runIfNotRoot(RunTask.scala:60)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Control$.trapUnit(Control.scala:19)
at sbt.Distributor$Run$Worker.run(ParallelRunner.scala:131)
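The failure above (expected:<0> but was:<3>) is consistent with a timing race in "latest" offset resolution: a consumer whose start offset resolves to the log end after all messages are produced should see zero pending messages, while one that resolves it before production sees everything. A hypothetical Java sketch of that arithmetic; names like resolveStartOffset and pendingMessages are illustrative, not Kafka's actual API:

```java
// Hypothetical sketch of auto.offset.reset semantics; not Kafka's actual code.
public class OffsetResetSketch {
    // 0.7-era Kafka used the policy values "smallest" and "largest".
    static long resolveStartOffset(String policy, long earliest, long logEnd) {
        return policy.equals("largest") ? logEnd : earliest;
    }

    // Messages a consumer would fetch between its start offset and the log end.
    static long pendingMessages(long startOffset, long logEnd) {
        return logEnd - startOffset;
    }

    public static void main(String[] args) {
        long logEnd = 3; // three messages already in the log
        // Reset resolved AFTER the produce: nothing pending -> the expected 0.
        System.out.println(pendingMessages(resolveStartOffset("largest", 0, logEnd), logEnd));
        // Reset resolved BEFORE the produce (log end still 0): all 3 show up -> was:<3>.
        System.out.println(pendingMessages(resolveStartOffset("largest", 0, 0), logEnd));
    }
}
```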
[info] == core-kafka / kafka.integration.AutoOffsetResetTest ==
[info]
[info] == core-kafka / kafka.integration.PrimitiveApiTest ==
[info] Test Starting: testProduceAndMultiFetch(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetch(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduce(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testMultiProduce(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduceWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testMultiProduceWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testDefaultEncoderProducerAndFetch(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testDefaultEncoderProducerAndFetch(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testDefaultEncoderProducerAndFetchWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testDefaultEncoderProducerAndFetchWithCompression(kafka.integration.PrimitiveApiTest)
[info] Test Starting: testConsumerNotExistTopic(kafka.integration.PrimitiveApiTest)
[info] Test Passed: testConsumerNotExistTopic(kafka.integration.PrimitiveApiTest)
[info] == core-kafka / kafka.integration.PrimitiveApiTest ==
[info]
[info] == core-kafka / kafka.javaapi.producer.ProducerTest ==
[info] Test Starting: testSend
[info] Test Passed: testSend
[info] Test Starting: testSendSingleMessage
[info] Test Passed: testSendSingleMessage
[info] Test Starting: testInvalidPartition
[info] Test Passed: testInvalidPartition
[info] Test Starting: testSyncProducerPool
[info] Test Passed: testSyncProducerPool
[info] Test Starting: testAsyncProducerPool
[info] Test Passed: testAsyncProducerPool
[info] Test Starting: testSyncUnavailableProducerException
[info] Test Passed: testSyncUnavailableProducerException
[info] Test Starting: testAsyncUnavailableProducerException
[info] Test Passed: testAsyncUnavailableProducerException
[info] Test Starting: testConfigBrokerPartitionInfoWithPartitioner
[info] Test Passed: testConfigBrokerPartitionInfoWithPartitioner
[info] Test Starting: testConfigBrokerPartitionInfo
[info] Test Passed: testConfigBrokerPartitionInfo
[info] Test Starting: testZKSendToNewTopic
[info] Test Passed: testZKSendToNewTopic
[info] Test Starting: testZKSendWithDeadBroker
[info] Test Passed: testZKSendWithDeadBroker
[info] Test Starting: testPartitionedSendToNewTopic
[info] Test Passed: testPartitionedSendToNewTopic
[info] Test Starting: testPartitionedSendToNewBrokerInExistingTopic
[info] Test Passed: testPartitionedSendToNewBrokerInExistingTopic
[info] Test Starting: testDefaultPartitioner
[info] Test Passed: testDefaultPartitioner
[info] == core-kafka / kafka.javaapi.producer.ProducerTest ==
[info]
[info] == core-kafka / kafka.message.FileMessageSetTest ==
[info] Test Starting: testWrittenEqualsRead
[info] Test Passed: testWrittenEqualsRead
[info] Test Starting: testIteratorIsConsistent
[info] Test Passed: testIteratorIsConsistent
[info] Test Starting: testSizeInBytes
[info] Test Passed: testSizeInBytes
[info] Test Starting: testWriteTo
[info] Test Passed: testWriteTo
[info] Test Starting: testFileSize
[info] Test Passed: testFileSize
[info] Test Starting: testIterationOverPartialAndTruncation
[info] Test Passed: testIterationOverPartialAndTruncation
[info] Test Starting: testIterationDoesntChangePosition
[info] Test Passed: testIterationDoesntChangePosition
[info] Test Starting: testRead
[info] Test Passed: testRead
[info] == core-kafka / kafka.message.FileMessageSetTest ==
[info]
[info] == core-kafka / kafka.message.CompressionUtilTest ==
[info] Test Starting: testSimpleCompressDecompress
[info] Test Passed: testSimpleCompressDecompress
[info] Test Starting: testComplexCompressDecompress
[info] Test Passed: testComplexCompressDecompress
[info] Test Starting: testSnappyCompressDecompressExplicit
[info] Test Passed: testSnappyCompressDecompressExplicit
[info] == core-kafka / kafka.message.CompressionUtilTest ==
[info]
[info] == core-kafka / kafka.log.LogCorruptionTest ==
[info] Test Starting: testMessageSizeTooLarge(kafka.log.LogCorruptionTest)
This is good
[info] Test Passed: testMessageSizeTooLarge(kafka.log.LogCorruptionTest)
[info] == core-kafka / kafka.log.LogCorruptionTest ==
[info]
[info] == core-kafka / Test cleanup 1 ==
[info] Deleting directory /tmp/sbt_dc8c451e
[info] == core-kafka / Test cleanup 1 ==
[info]
[info] == core-kafka / test-finish ==
[error] Failed: : Total 140, Failed 2, Errors 0, Passed 138, Skipped 0
[info] == core-kafka / test-finish ==
[info]
[info] == core-kafka / test-cleanup ==
[info] == core-kafka / test-cleanup ==
[error] Error running kafka.zk.ZKEphemeralTest: Test FAILED
[error] Error running kafka.integration.AutoOffsetResetTest: Test FAILED
[error] Error running test: One or more subtasks failed
[info]
[info] Total time: 131 s, completed Aug 25, 2012 6:08:28 AM
[info]
[info] Total session time: 132 s, completed Aug 25, 2012 6:08:28 AM
[error] Error during build.
Build step 'Execute shell' marked build as failure
Jenkins build is back to normal : Kafka-trunk #146
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Kafka-trunk/146/changes>
Build failed in Jenkins: Kafka-trunk #145
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Kafka-trunk/145/changes>
Changes:
[joestein] KAFKA-533 changes to NOTICE and LICENSE related to KAFKA-534 removing client libraries from repo
[joestein] KAFKA-534 remove client library directory
------------------------------------------
[...truncated 1357 lines...]
[info] Test Starting: testRead
[info] Test Passed: testRead
[info] == core-kafka / kafka.message.FileMessageSetTest ==
[info]
[info] == core-kafka / kafka.javaapi.integration.PrimitiveApiTest ==
[info] Test Starting: testProduceAndMultiFetch(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-27 01:40:14,424] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
[2012-09-27 01:40:14,424] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
[2012-09-27 01:40:14,424] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
[info] Test Passed: testProduceAndMultiFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-27 01:40:14,634] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
[2012-09-27 01:40:14,634] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
[2012-09-27 01:40:14,634] WARN Wrong partition -1 valid partitions (0,0) (kafka.log.LogManager:73)
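The "Wrong partition -1 valid partitions (0,0)" warnings amount to a range check on the requested partition id against the broker's inclusive valid range; in 0.7-era Kafka a partition of -1 appears to signal a broker-chosen partition, so the check warns before falling back. A hypothetical sketch of that check; the class and method names are illustrative, not Kafka's code:

```java
// Hypothetical partition range check mirroring the warning's inclusive "(0,0)" range.
public class PartitionCheck {
    static boolean isValidPartition(int partition, int maxPartitionId) {
        return partition >= 0 && partition <= maxPartitionId;
    }

    public static void main(String[] args) {
        System.out.println(isValidPartition(-1, 0)); // false -> triggers the WARN above
        System.out.println(isValidPartition(0, 0));  // true  -> normal append path
    }
}
```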
[info] Test Passed: testProduceAndMultiFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-27 01:40:14,993] WARN Session 0x13a056187c60009 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[info] Test Passed: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-27 01:40:15,411] WARN Session 0x13a056179d9000a for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[info] Test Passed: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-27 01:40:15,701] WARN Session 0x13a056199810004 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-27 01:40:15,719] WARN Session 0x13a056199810005 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-27 01:40:15,734] WARN Session 0x13a056187c60006 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
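The repeated ConnectException stack traces come from ZooKeeper client send threads that outlive a test's embedded server and keep attempting to reconnect; the tests still pass, so they are noise rather than failures here. A hypothetical sketch of such a retry loop with capped backoff; the names and backoff values are illustrative, not ZkClient's or ZooKeeper's actual implementation:

```java
// Hypothetical reconnect loop with capped exponential backoff; illustrative only.
public class ReconnectSketch {
    interface Dialer { boolean tryConnect(); } // stand-in for the socket connect

    // Returns the attempt number that succeeded, or -1 if maxAttempts exhausted.
    static int connectWithRetry(Dialer dialer, int maxAttempts) throws InterruptedException {
        long backoffMs = 50;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (dialer.tryConnect()) return attempt;
            Thread.sleep(backoffMs);                    // each refusal logs a WARN, then waits
            backoffMs = Math.min(backoffMs * 2, 1000);  // cap the backoff
        }
        return -1; // server never came back (e.g. the test already shut it down)
    }

    public static void main(String[] args) throws InterruptedException {
        // A server that "refuses" twice and then accepts.
        int[] calls = {0};
        System.out.println(connectWithRetry(() -> ++calls[0] >= 3, 10)); // 3
    }
}
```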
[info] Test Passed: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] == core-kafka / kafka.javaapi.integration.PrimitiveApiTest ==
[info]
[info] == core-kafka / kafka.server.ServerShutdownTest ==
[info] Test Starting: testCleanShutdown
[2012-09-27 01:40:16,558] WARN Session 0x13a056187c60009 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[info] Test Passed: testCleanShutdown
[info] == core-kafka / kafka.server.ServerShutdownTest ==
[info]
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info] Test Starting: testKafkaLog4jConfigs
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
log4j:WARN No appenders could be found for logger (org.I0Itec.zkclient.ZkEventThread).
log4j:WARN Please initialize the log4j system properly.
[info] Test Passed: testKafkaLog4jConfigs
[info] Test Starting: testBrokerListLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testBrokerListLog4jAppends
[info] Test Starting: testZkConnectLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testZkConnectLog4jAppends
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info]
[info] == core-kafka / kafka.zk.ZKEphemeralTest ==
[info] Test Starting: testEphemeralNodeCleanup(kafka.zk.ZKEphemeralTest)
[info] Test Passed: testEphemeralNodeCleanup(kafka.zk.ZKEphemeralTest)
[info] == core-kafka / kafka.zk.ZKEphemeralTest ==
[info]
[info] == core-kafka / unit.kafka.producer.ProducerMethodsTest ==
[info] Test Starting: producerThrowsNoBrokersException
[info] Test Passed: producerThrowsNoBrokersException
[info] == core-kafka / unit.kafka.producer.ProducerMethodsTest ==
[info]
[info] == core-kafka / kafka.message.MessageTest ==
[info] Test Starting: testFieldValues
[info] Test Passed: testFieldValues
[info] Test Starting: testChecksum
[info] Test Passed: testChecksum
[info] Test Starting: testEquality
[info] Test Passed: testEquality
[info] Test Starting: testIsHashable
[info] Test Passed: testIsHashable
[info] == core-kafka / kafka.message.MessageTest ==
[info]
[info] == core-kafka / kafka.zk.ZKLoadBalanceTest ==
[info] Test Starting: testLoadBalance(kafka.zk.ZKLoadBalanceTest)
[info] Test Passed: testLoadBalance(kafka.zk.ZKLoadBalanceTest)
[info] == core-kafka / kafka.zk.ZKLoadBalanceTest ==
[info]
[info] == core-kafka / kafka.log.LogManagerTest ==
[info] Test Starting: testCreateLog
[info] Test Passed: testCreateLog
[info] Test Starting: testGetLog
[info] Test Passed: testGetLog
[info] Test Starting: testInvalidTopicName
[info] Test Passed: testInvalidTopicName
[info] Test Starting: testCleanupExpiredSegments
[info] Test Passed: testCleanupExpiredSegments
[info] Test Starting: testCleanupSegmentsToMaintainSize
[info] Test Passed: testCleanupSegmentsToMaintainSize
[info] Test Starting: testTimeBasedFlush
[info] Test Passed: testTimeBasedFlush
[info] Test Starting: testConfigurablePartitions
[info] Test Passed: testConfigurablePartitions
[info] == core-kafka / kafka.log.LogManagerTest ==
[info]
[info] == core-kafka / kafka.producer.SyncProducerTest ==
[info] Test Starting: testReachableServer
[info] Test Passed: testReachableServer
[info] Test Starting: testSingleMessageSizeTooLarge
[info] Test Passed: testSingleMessageSizeTooLarge
[info] Test Starting: testCompressedMessageSizeTooLarge
[info] Test Passed: testCompressedMessageSizeTooLarge
[info] == core-kafka / kafka.producer.SyncProducerTest ==
[info]
[info] == core-kafka / kafka.consumer.FetcherTest ==
[info] Test Starting: testFetcher(kafka.consumer.FetcherTest)
[info] Test Passed: testFetcher(kafka.consumer.FetcherTest)
[info] == core-kafka / kafka.consumer.FetcherTest ==
[info]
[info] == core-kafka / kafka.log.SegmentListTest ==
[info] Test Starting: testAppend
[info] Test Passed: testAppend
[info] Test Starting: testTrunc
[info] Test Passed: testTrunc
[info] Test Starting: testTruncBeyondList
[info] Test Passed: testTruncBeyondList
[info] == core-kafka / kafka.log.SegmentListTest ==
[info]
[info] == core-kafka / kafka.utils.UtilsTest ==
[info] Test Starting: testSwallow
[info] Test Passed: testSwallow
[info] Test Starting: testCircularIterator
[info] Test Passed: testCircularIterator
[info] == core-kafka / kafka.utils.UtilsTest ==
[info]
[info] == core-kafka / kafka.javaapi.message.ByteBufferMessageSetTest ==
[info] Test Starting: testWrittenEqualsRead
[info] Test Passed: testWrittenEqualsRead
[info] Test Starting: testIteratorIsConsistent
[info] Test Passed: testIteratorIsConsistent
[info] Test Starting: testSizeInBytes
[info] Test Passed: testSizeInBytes
[info] Test Starting: testValidBytes
[info] Test Passed: testValidBytes
[info] Test Starting: testEquals
[info] Test Passed: testEquals
[info] Test Starting: testIteratorIsConsistentWithCompression
[info] Test Passed: testIteratorIsConsistentWithCompression
[info] Test Starting: testSizeInBytesWithCompression
[info] Test Passed: testSizeInBytesWithCompression
[info] Test Starting: testValidBytesWithCompression
[info] Test Passed: testValidBytesWithCompression
[info] Test Starting: testEqualsWithCompression
[info] Test Passed: testEqualsWithCompression
[info] == core-kafka / kafka.javaapi.message.ByteBufferMessageSetTest ==
[info]
[info] == core-kafka / kafka.network.SocketServerTest ==
[info] Test Starting: simpleRequest
[info] Test Passed: simpleRequest
[info] Test Starting: tooBigRequestIsRejected
[info] Test Passed: tooBigRequestIsRejected
[info] == core-kafka / kafka.network.SocketServerTest ==
[info]
[info] == core-kafka / test-finish ==
[error] Failed: : Total 141, Failed 1, Errors 0, Passed 140, Skipped 0
[info] == core-kafka / test-finish ==
[info]
[info] == core-kafka / Test cleanup 1 ==
[info] Deleting directory /tmp/sbt_a07426d7
[info] == core-kafka / Test cleanup 1 ==
[info]
[info] == core-kafka / test-cleanup ==
[info] == core-kafka / test-cleanup ==
[info]
[info] == perf / copy-test-resources ==
[info] == perf / copy-test-resources ==
[info]
[info] == java-examples / test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[info]   Post-analysis: 0 classes.
[info] == java-examples / test-compile ==
[info]
[info] == perf / test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[info]   Post-analysis: 0 classes.
[info] == perf / test-compile ==
[info]
[info] == hadoop producer / copy-resources ==
[info] == hadoop producer / copy-resources ==
[info]
[info] == hadoop producer / copy-test-resources ==
[info] == hadoop producer / copy-test-resources ==
[info]
[info] == java-examples / copy-test-resources ==
[info] == java-examples / copy-test-resources ==
[info]
[info] == hadoop consumer / copy-test-resources ==
[info] == hadoop consumer / copy-test-resources ==
[0m[[0minfo[0m] [34m[0m
[0m[[0minfo[0m] [34m== java-examples / copy-resources ==[0m
[0m[[0minfo[0m] [34m== java-examples / copy-resources ==[0m
[0m[[0minfo[0m] [34m[0m
[0m[[0minfo[0m] [34m== hadoop producer / test-compile ==[0m
[0m[[0minfo[0m] [0m Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.[0m
[0m[[0minfo[0m] [0mCompiling test sources...[0m
[0m[[0minfo[0m] [0mNothing to compile.[0m
[0m[[0minfo[0m] [0m Post-analysis: 0 classes.[0m
[0m[[0minfo[0m] [34m== hadoop producer / test-compile ==[0m
[0m[[0minfo[0m] [34m[0m
[0m[[0minfo[0m] [34m== hadoop consumer / test-compile ==[0m
[0m[[0minfo[0m] [0m Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.[0m
[0m[[0minfo[0m] [0mCompiling test sources...[0m
[0m[[0minfo[0m] [0mNothing to compile.[0m
[0m[[0minfo[0m] [0m Post-analysis: 0 classes.[0m
[0m[[0minfo[0m] [34m== hadoop consumer / test-compile ==[0m
[0m[[0minfo[0m] [34m[0m
[0m[[0minfo[0m] [34m== hadoop consumer / copy-resources ==[0m
[0m[[0minfo[0m] [34m== hadoop consumer / copy-resources ==[0m
[0m[[0minfo[0m] [34m[0m
[0m[[0minfo[0m] [34m== perf / copy-resources ==[0m
[0m[[0minfo[0m] [34m== perf / copy-resources ==[0m
[0m[[31merror[0m] [0mError running kafka.integration.AutoOffsetResetTest: Test FAILED[0m
[0m[[31merror[0m] [0mError running test: One or more subtasks failed[0m
[0m[[0minfo[0m] [0m[0m
[0m[[0minfo[0m] [0mTotal time: 142 s, completed Sep 27, 2012 1:40:33 AM[0m
[0m[[0minfo[0m] [0m[0m
[0m[[0minfo[0m] [0mTotal session time: 142 s, completed Sep 27, 2012 1:40:33 AM[0m
[0m[[31merror[0m] [0mError during build.[0m
Build step 'Execute shell' marked build as failure
Build failed in Jenkins: Kafka-trunk #144
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Kafka-trunk/144/changes>
Changes:
[junrao] TopicCount.constructTopicCount isn't thread-safe; patched by Jun Rao; reviewed by Joel Koshy; KAFKA-379
------------------------------------------
[...truncated 420 lines...]
[info] 
[info] == core-kafka / kafka.message.FileMessageSetTest ==
[info] Test Starting: testWrittenEqualsRead
[info] Test Passed: testWrittenEqualsRead
[info] Test Starting: testIteratorIsConsistent
[info] Test Passed: testIteratorIsConsistent
[info] Test Starting: testSizeInBytes
[info] Test Passed: testSizeInBytes
[info] Test Starting: testWriteTo
[info] Test Passed: testWriteTo
[info] Test Starting: testFileSize
[info] Test Passed: testFileSize
[info] Test Starting: testIterationOverPartialAndTruncation
[info] Test Passed: testIterationOverPartialAndTruncation
[info] Test Starting: testIterationDoesntChangePosition
[info] Test Passed: testIterationDoesntChangePosition
[info] Test Starting: testRead
[info] Test Passed: testRead
[info] == core-kafka / kafka.message.FileMessageSetTest ==
[info] 
[info] == core-kafka / kafka.message.CompressionUtilTest ==
[info] Test Starting: testSimpleCompressDecompress
[info] Test Passed: testSimpleCompressDecompress
[info] Test Starting: testComplexCompressDecompress
[info] Test Passed: testComplexCompressDecompress
[info] Test Starting: testSnappyCompressDecompressExplicit
[info] Test Passed: testSnappyCompressDecompressExplicit
[info] == core-kafka / kafka.message.CompressionUtilTest ==
[info] 
[info] == core-kafka / kafka.log.LogTest ==
[info] Test Starting: testTimeBasedLogRoll
[info] Test Passed: testTimeBasedLogRoll
[info] Test Starting: testSizeBasedLogRoll
[info] Test Passed: testSizeBasedLogRoll
[info] Test Starting: testLoadEmptyLog
[info] Test Passed: testLoadEmptyLog
[info] Test Starting: testLoadInvalidLogsFails
[info] Test Passed: testLoadInvalidLogsFails
[info] Test Starting: testAppendAndRead
[info] Test Passed: testAppendAndRead
[info] Test Starting: testReadOutOfRange
[info] Test Passed: testReadOutOfRange
[info] Test Starting: testLogRolls
[info] Test Passed: testLogRolls
[info] Test Starting: testFindSegment
[info] Test Passed: testFindSegment
[info] Test Starting: testEdgeLogRolls
[info] Test Passed: testEdgeLogRolls
[info] Test Starting: testMessageSizeCheck
[info] Test Passed: testMessageSizeCheck
[info] == core-kafka / kafka.log.LogTest ==
[info] 
[info] == core-kafka / kafka.consumer.TopicFilterTest ==
[info] Test Starting: testWhitelists
[info] Test Passed: testWhitelists
[info] Test Starting: testBlacklists
[info] Test Passed: testBlacklists
[info] == core-kafka / kafka.consumer.TopicFilterTest ==
[info] 
[info] == core-kafka / kafka.javaapi.integration.PrimitiveApiTest ==
[info] Test Starting: testProduceAndMultiFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Passed: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] == core-kafka / kafka.javaapi.integration.PrimitiveApiTest ==
[info] 
[info] == core-kafka / kafka.javaapi.consumer.ZookeeperConsumerConnectorTest ==
[info] Test Starting: testBasic(kafka.javaapi.consumer.ZookeeperConsumerConnectorTest)
[info] Test Passed: testBasic(kafka.javaapi.consumer.ZookeeperConsumerConnectorTest)
[info] == core-kafka / kafka.javaapi.consumer.ZookeeperConsumerConnectorTest ==
[info] 
[info] == core-kafka / kafka.message.ByteBufferMessageSetTest ==
[info] Test Starting: testWrittenEqualsRead
[info] Test Passed: testWrittenEqualsRead
[info] Test Starting: testIteratorIsConsistent
[info] Test Passed: testIteratorIsConsistent
[info] Test Starting: testSizeInBytes
[info] Test Passed: testSizeInBytes
[info] Test Starting: testWriteTo
[info] Test Passed: testWriteTo
[info] Test Starting: testSmallFetchSize
[info] Test Passed: testSmallFetchSize
[info] Test Starting: testValidBytes
[info] Test Passed: testValidBytes
[info] Test Starting: testEquals
[info] Test Passed: testEquals
[info] Test Starting: testIterator
[info] Test Passed: testIterator
[info] == core-kafka / kafka.message.ByteBufferMessageSetTest ==
[info] 
[info] == core-kafka / kafka.integration.AutoOffsetResetTest ==
[info] Test Starting: testEarliestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] Test Passed: testEarliestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] Test Starting: testEarliestOffsetResetBackward(kafka.integration.AutoOffsetResetTest)
[info] Test Passed: testEarliestOffsetResetBackward(kafka.integration.AutoOffsetResetTest)
[info] Test Starting: testLatestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] Test Passed: testLatestOffsetResetForward(kafka.integration.AutoOffsetResetTest)
[info] == core-kafka / kafka.integration.AutoOffsetResetTest ==
[info] 
[info] == core-kafka / kafka.javaapi.message.ByteBufferMessageSetTest ==
[info] Test Starting: testWrittenEqualsRead
[info] Test Passed: testWrittenEqualsRead
[info] Test Starting: testIteratorIsConsistent
[info] Test Passed: testIteratorIsConsistent
[info] Test Starting: testSizeInBytes
[info] Test Passed: testSizeInBytes
[info] Test Starting: testValidBytes
[info] Test Passed: testValidBytes
[info] Test Starting: testEquals
[info] Test Passed: testEquals
[info] Test Starting: testIteratorIsConsistentWithCompression
[info] Test Passed: testIteratorIsConsistentWithCompression
[info] Test Starting: testSizeInBytesWithCompression
[info] Test Passed: testSizeInBytesWithCompression
[info] Test Starting: testValidBytesWithCompression
[info] Test Passed: testValidBytesWithCompression
[info] Test Starting: testEqualsWithCompression
[info] Test Passed: testEqualsWithCompression
[info] == core-kafka / kafka.javaapi.message.ByteBufferMessageSetTest ==
[info] 
[info] == core-kafka / kafka.server.ServerShutdownTest ==
[info] Test Starting: testCleanShutdown
[info] Test Passed: testCleanShutdown
[info] == core-kafka / kafka.server.ServerShutdownTest ==
[info] 
[info] == core-kafka / kafka.producer.SyncProducerTest ==
[info] Test Starting: testReachableServer
[info] Test Passed: testReachableServer
[info] Test Starting: testSingleMessageSizeTooLarge
[info] Test Passed: testSingleMessageSizeTooLarge
[info] Test Starting: testCompressedMessageSizeTooLarge
[info] Test Passed: testCompressedMessageSizeTooLarge
[info] == core-kafka / kafka.producer.SyncProducerTest ==
[info] 
[info] == core-kafka / kafka.log.LogOffsetTest ==
[info] Test Starting: testEmptyLogs
[info] Test Passed: testEmptyLogs
[info] Test Starting: testGetOffsetsBeforeLatestTime
[info] Test Passed: testGetOffsetsBeforeLatestTime
[info] Test Starting: testEmptyLogsGetOffsets
[info] Test Passed: testEmptyLogsGetOffsets
[info] Test Starting: testGetOffsetsBeforeNow
[info] Test Passed: testGetOffsetsBeforeNow
[info] Test Starting: testGetOffsetsBeforeEarliestTime
[info] Test Passed: testGetOffsetsBeforeEarliestTime
[info] == core-kafka / kafka.log.LogOffsetTest ==
[info] 
[info] == core-kafka / kafka.zk.ZKEphemeralTest ==
[info] Test Starting: testEphemeralNodeCleanup(kafka.zk.ZKEphemeralTest)
[info] Test Passed: testEphemeralNodeCleanup(kafka.zk.ZKEphemeralTest)
[info] == core-kafka / kafka.zk.ZKEphemeralTest ==
[info] 
[info] == core-kafka / kafka.integration.BackwardsCompatibilityTest ==
[info] Test Starting: testProtocolVersion0(kafka.integration.BackwardsCompatibilityTest)
[info] Test Passed: testProtocolVersion0(kafka.integration.BackwardsCompatibilityTest)
[info] == core-kafka / kafka.integration.BackwardsCompatibilityTest ==
[info] 
[info] == core-kafka / kafka.message.MessageTest ==
[info] Test Starting: testFieldValues
[info] Test Passed: testFieldValues
[info] Test Starting: testChecksum
[info] Test Passed: testChecksum
[info] Test Starting: testEquality
[info] Test Passed: testEquality
[info] Test Starting: testIsHashable
[info] Test Passed: testIsHashable
[info] == core-kafka / kafka.message.MessageTest ==
[info] 
[info] == core-kafka / kafka.javaapi.producer.ProducerTest ==
[info] Test Starting: testSend
[info] Test Passed: testSend
[info] Test Starting: testSendSingleMessage
[info] Test Passed: testSendSingleMessage
[info] Test Starting: testInvalidPartition
[info] Test Passed: testInvalidPartition
[info] Test Starting: testSyncProducerPool
[info] Test Passed: testSyncProducerPool
[info] Test Starting: testAsyncProducerPool
[info] Test Passed: testAsyncProducerPool
[info] Test Starting: testSyncUnavailableProducerException
[info] Test Passed: testSyncUnavailableProducerException
[info] Test Starting: testAsyncUnavailableProducerException
[info] Test Passed: testAsyncUnavailableProducerException
[info] Test Starting: testConfigBrokerPartitionInfoWithPartitioner
[info] Test Passed: testConfigBrokerPartitionInfoWithPartitioner
[info] Test Starting: testConfigBrokerPartitionInfo
[info] Test Passed: testConfigBrokerPartitionInfo
[info] Test Starting: testZKSendToNewTopic
[info] Test Passed: testZKSendToNewTopic
[info] Test Starting: testZKSendWithDeadBroker
[info] Test Passed: testZKSendWithDeadBroker
[info] Test Starting: testPartitionedSendToNewTopic
[info] Test Passed: testPartitionedSendToNewTopic
[info] Test Starting: testPartitionedSendToNewBrokerInExistingTopic
[info] Test Passed: testPartitionedSendToNewBrokerInExistingTopic
[info] Test Starting: testDefaultPartitioner
[info] Test Passed: testDefaultPartitioner
[info] == core-kafka / kafka.javaapi.producer.ProducerTest ==
[info] 
[info] == core-kafka / kafka.consumer.FetcherTest ==
[info] Test Starting: testFetcher(kafka.consumer.FetcherTest)
[info] Test Passed: testFetcher(kafka.consumer.FetcherTest)
[info] == core-kafka / kafka.consumer.FetcherTest ==
[info] 
[info] == core-kafka / Test cleanup 1 ==
[info] Deleting directory /tmp/sbt_aa7c472d
[info] == core-kafka / Test cleanup 1 ==
[info] 
[info] == core-kafka / test-finish ==
[error] Failed: : Total 141, Failed 1, Errors 0, Passed 140, Skipped 0
[info] == core-kafka / test-finish ==
[info] 
[info] == core-kafka / test-cleanup ==
[info] == core-kafka / test-cleanup ==
[info] 
[info] == perf / test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[info]   Post-analysis: 0 classes.
[info] == perf / test-compile ==
[info] 
[info] == hadoop consumer / copy-test-resources ==
[info] == hadoop consumer / copy-test-resources ==
[info] 
[info] == hadoop consumer / copy-resources ==
[info] == hadoop consumer / copy-resources ==
[error] Error running kafka.producer.ProducerTest: Test FAILED
[error] Error running compile: javac returned nonzero exit code
[error] Error running compile: javac returned nonzero exit code
[error] Error running test: One or more subtasks failed
[info] 
[info] Total time: 137 s, completed Sep 18, 2012 6:04:25 AM
[info] 
[info] Total session time: 137 s, completed Sep 18, 2012 6:04:25 AM
[error] Error during build.
Build step 'Execute shell' marked build as failure
Build failed in Jenkins: Kafka-trunk #143
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Kafka-trunk/143/changes>
Changes:
[junrao] Handle topic names with / on Kafka server; patched by Swapnil Ghike; reviewed by Jay Kreps and Jun Rao; kafka-495
------------------------------------------
[...truncated 2261 lines...]
[2012-09-07 04:19:08,269] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:08,270] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testProduceAndMultiFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:08,271] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,271] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:08,273] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:08,274] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:08,274] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:08,274] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,276] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:08,277] INFO Created log for 'test3'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,280] INFO Created log for 'test2'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,283] INFO Created log for 'test1'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,418] WARN Session 0x1399ef3d5c10005 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:08,479] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:08,479] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,479] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:08,479] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:08,480] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:08,481] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:08,481] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testMultiProduce(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:08,482] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,483] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:08,484] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:08,485] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:08,485] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:08,486] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,487] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:08,488] INFO Created log for 'test3'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,489] WARN Session 0x1399ef3ded30002 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:08,492] INFO Created log for 'test2'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,497] INFO Created log for 'test1'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,639] WARN Session 0x1399ef3d5c10003 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:08,691] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:08,691] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,691] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:08,691] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:08,691] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:08,692] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:08,693] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testMultiProduceWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:08,694] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,694] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:08,696] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:08,697] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:08,697] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:08,697] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:08,698] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:08,699] INFO Created log for 'test'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:08,723] WARN Session 0x1399ef3aa650007 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:09,100] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,100] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,100] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,100] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,100] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,102] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,103] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testProduceAndFetch(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:09,104] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,104] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:09,106] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:09,106] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:09,107] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:09,107] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,107] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,108] INFO Created log for 'test'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,253] WARN Session 0x1399ef3d5c10002 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:09,509] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,510] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,510] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,510] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,510] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,511] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,512] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testProduceAndFetchWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:09,512] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,513] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:09,514] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:09,515] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:09,515] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:09,516] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,517] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,518] INFO Created log for 'test1'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,547] INFO Created log for 'test2'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,551] INFO Created log for 'test3'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,641] WARN Session 0x1399ef3d5c10004 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:09,720] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,720] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,720] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,720] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,720] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,722] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,722] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testProduceAndMultiFetchJava(kafka.javaapi.integration.PrimitiveApiTest)
[info] Test Starting: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[2012-09-07 04:19:09,724] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,724] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:09,726] INFO Awaiting connections on port 9999 (kafka.network.Acceptor:130)
[2012-09-07 04:19:09,727] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:09,727] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:09,728] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,729] INFO Connected to localhost:9999 for producing (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,730] INFO Created log for 'test1'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,767] INFO Created log for 'test2'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,796] WARN Session 0x1399ef3d5c10005 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:09,818] INFO Created log for 'test3'-0 (kafka.log.LogManager:61)
[2012-09-07 04:19:09,884] WARN Session 0x1399ef3d5c10003 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn:1188)
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1146)
[2012-09-07 04:19:09,933] INFO Disconnecting from localhost:9999 (kafka.producer.SyncProducer:61)
[2012-09-07 04:19:09,933] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,934] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,933] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-09-07 04:19:09,934] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,935] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-09-07 04:19:09,935] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testProduceAndMultiFetchJavaWithCompression(kafka.javaapi.integration.PrimitiveApiTest)
[info] == core-kafka / kafka.javaapi.integration.PrimitiveApiTest ==
[info]
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info] Test Starting: testKafkaLog4jConfigs
[2012-09-07 04:19:09,946] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,946] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:09,947] INFO connecting to ZK: 127.0.0.1:2182 (kafka.server.KafkaZooKeeper:61)
[2012-09-07 04:19:09,958] INFO Awaiting connections on port 46964 (kafka.network.Acceptor:130)
[2012-09-07 04:19:09,959] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:09,959] INFO Registering broker /brokers/ids/0 (kafka.server.KafkaZooKeeper:61)
[2012-09-07 04:19:09,967] INFO Registering broker /brokers/ids/0 succeeded with id:0,creatorId:67.195.138.60-1346991549959,host:67.195.138.60,port:46964 (kafka.server.KafkaZooKeeper:61)
[2012-09-07 04:19:09,967] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:09,967] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,968] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-09-07 04:19:09,968] INFO starting log cleaner every 300000 ms (kafka.log.LogManager:61)
[2012-09-07 04:19:09,970] INFO Awaiting connections on port 40868 (kafka.network.Acceptor:130)
[2012-09-07 04:19:09,971] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-09-07 04:19:09,971] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-09-07 04:19:09,971] INFO Kafka server started. (kafka.server.KafkaServer:61)
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
log4j:WARN No appenders could be found for logger (kafka.producer.ProducerPool).
log4j:WARN Please initialize the log4j system properly.
[info] Test Passed: testKafkaLog4jConfigs
[info] Test Starting: testBrokerListLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testBrokerListLog4jAppends
[info] Test Starting: testZkConnectLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testZkConnectLog4jAppends
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info]
[info] == core-kafka / kafka.integration.LazyInitProducerTest ==
[info] Test Starting: testProduceAndMultiFetch(kafka.integration.LazyInitProducerTest)
[info] Test Passed: testProduceAndMultiFetch(kafka.integration.LazyInitProducerTest)
[info] Test Starting: testMultiProduce(kafka.integration.LazyInitProducerTest)
[info] Test Passed: testMultiProduce(kafka.integration.LazyInitProducerTest)
[info] Test Starting: testProduceAndFetch(kafka.integration.LazyInitProducerTest)
[info] Test Passed: testProduceAndFetch(kafka.integration.LazyInitProducerTest)
[info] Test Starting: testMultiProduceResend(kafka.integration.LazyInitProducerTest)
[info] Test Passed: testMultiProduceResend(kafka.integration.LazyInitProducerTest)
[info] == core-kafka / kafka.integration.LazyInitProducerTest ==
[info]
[info] == core-kafka / kafka.utils.UtilsTest ==
[info] Test Starting: testSwallow
[info] Test Passed: testSwallow
[info] Test Starting: testCircularIterator
[info] Test Passed: testCircularIterator
[info] == core-kafka / kafka.utils.UtilsTest ==
[info]
[info] == core-kafka / test-finish ==
[error] Failed: : Total 141, Failed 1, Errors 0, Passed 140, Skipped 0
[info] == core-kafka / test-finish ==
[info]
[info] == core-kafka / Test cleanup 1 ==
[info] Deleting directory /tmp/sbt_fb4e25f5
[info] == core-kafka / Test cleanup 1 ==
[info]
[info] == core-kafka / test-cleanup ==
[info] == core-kafka / test-cleanup ==
[info]
[info] == hadoop consumer / copy-test-resources ==
[info] == hadoop consumer / copy-test-resources ==
[info]
[info] == java-examples / copy-resources ==
[info] == java-examples / copy-resources ==
[info]
[info] == java-examples / copy-test-resources ==
[info] == java-examples / copy-test-resources ==
[info]
[info] == hadoop consumer / test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[info]   Post-analysis: 0 classes.
[info] == hadoop consumer / test-compile ==
[info]
[info] == hadoop producer / copy-resources ==
[info] == hadoop producer / copy-resources ==
[info]
[info] == hadoop consumer / copy-resources ==
[info] == hadoop consumer / copy-resources ==
[info]
[info] == java-examples / test-compile ==
[info]   Source analysis: 0 new/modified, 0 indirectly invalidated, 0 removed.
[info] Compiling test sources...
[info] Nothing to compile.
[info]   Post-analysis: 0 classes.
[info] == java-examples / test-compile ==
[error] Error running kafka.integration.AutoOffsetResetTest: Test FAILED
[error] Error running test: One or more subtasks failed
[info]
[info] Total time: 121 s, completed Sep 7, 2012 4:19:17 AM
[info]
[info] Total session time: 121 s, completed Sep 7, 2012 4:19:17 AM
[error] Error during build.
Build step 'Execute shell' marked build as failure
Build failed in Jenkins: Kafka-trunk #142
Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Kafka-trunk/142/changes>
Changes:
[junrao] Message size not checked at the server (patch v3); patched by Swapnil Ghike; reviewed by Jun Rao; KAFKA-469
------------------------------------------
[...truncated 2359 lines...]
[2012-08-31 05:21:57,528] INFO Registering broker /brokers/ids/0 succeeded with id:0,creatorId:localhost-1346408517439,host:localhost,port:52176 (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:21:57,532] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-31 05:21:57,536] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-08-31 05:21:57,542] INFO Connected to localhost:52176 for producing (kafka.producer.SyncProducer:61)
[2012-08-31 05:21:57,549] INFO Created log for 'test'-0 (kafka.log.LogManager:61)
[2012-08-31 05:21:57,551] INFO Begin registering broker topic /brokers/topics/test/0 with 1 partitions (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:21:57,644] INFO End registering broker topic /brokers/topics/test/0 (kafka.server.KafkaZooKeeper:61)
This is good
[2012-08-31 05:21:58,306] INFO group1_consumer1 Connecting to zookeeper instance at 127.0.0.1:2182 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,328] INFO group1_consumer1 starting auto committer every 1000 ms (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,333] INFO group1_consumer1 begin registering consumer group1_consumer1 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,394] INFO group1_consumer1 end registering consumer group1_consumer1 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,400] INFO group1_consumer1 starting watcher executor thread for consumer group1_consumer1 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,406] INFO group1_consumer1 begin rebalancing consumer group1_consumer1 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,457] INFO Cleared all relevant queues for this fetcher (kafka.consumer.Fetcher:61)
[2012-08-31 05:21:58,462] INFO Clearing the current data chunk for this consumer iterator (kafka.consumer.ConsumerIterator:61)
[2012-08-31 05:21:58,468] INFO Cleared the data chunks in all the consumer message iterators (kafka.consumer.Fetcher:61)
[2012-08-31 05:21:58,472] INFO group1_consumer1 Committing all offsets after clearing the fetcher queues (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,476] INFO group1_consumer1 Releasing partition ownership (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,481] INFO group1_consumer1 Consumer group1_consumer1 rebalancing the following partitions: List(0-0) for topic test with consumers: List(group1_consumer1-0) (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,485] INFO group1_consumer1 group1_consumer1-0 attempting to claim partition 0-0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,497] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-08-31 05:21:58,552] INFO group1_consumer1 group1_consumer1-0 successfully owned partition 0-0 for topic test (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,556] INFO group1_consumer1 Updating the cache (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,566] INFO group1_consumer1 Consumer group1_consumer1 selected partitions : test:0-0: fetched offset = 0: consumed offset = 0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,571] INFO group1_consumer1 end rebalancing consumer group1_consumer1 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,579] INFO group1_consumer1 ZKConsumerConnector shutting down (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,582] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-08-31 05:21:58,583] INFO force shutdown scheduler Kafka-consumer-autocommit- (kafka.utils.KafkaScheduler:61)
[2012-08-31 05:21:58,636] INFO group1_consumer1 ZKConsumerConnector shut down completed (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,640] INFO Disconnecting from localhost:52176 (kafka.producer.SyncProducer:61)
[2012-08-31 05:21:58,644] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-08-31 05:21:58,646] INFO Shutting down Kafka server (kafka.server.KafkaServer:61)
[2012-08-31 05:21:58,648] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor:223)
[2012-08-31 05:21:58,651] INFO ZK expired; release old list of broker partitions for topics (kafka.producer.ZKBrokerPartitionInfo:61)
[2012-08-31 05:21:58,650] INFO shutdown scheduler kafka-logcleaner- (kafka.utils.KafkaScheduler:61)
[2012-08-31 05:21:58,658] INFO shutdown scheduler kafka-logflusher- (kafka.utils.KafkaScheduler:61)
[2012-08-31 05:21:58,691] INFO Closing zookeeper client... (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:21:58,692] INFO zkActor stopped (kafka.log.LogManager:61)
[2012-08-31 05:21:58,711] INFO Kafka server shut down completed (kafka.server.KafkaServer:61)
[info] Test Passed: testMessageSizeTooLarge(kafka.log.LogCorruptionTest)
[info] == core-kafka / kafka.log.LogCorruptionTest ==
[info]
[info] == core-kafka / kafka.zk.ZKLoadBalanceTest ==
[info] Test Starting: testLoadBalance(kafka.zk.ZKLoadBalanceTest)
[2012-08-31 05:21:58,926] INFO group1_consumer1 Connecting to zookeeper instance at 127.0.0.1:2182 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,960] INFO group1_consumer1 starting auto committer every 1000 ms (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:58,965] INFO group1_consumer1 begin registering consumer group1_consumer1 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,053] INFO group1_consumer1 end registering consumer group1_consumer1 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,058] INFO group1_consumer1 starting watcher executor thread for consumer group1_consumer1 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,065] INFO group1_consumer1 begin rebalancing consumer group1_consumer1 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,113] INFO group1_consumer1 Releasing partition ownership (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,118] INFO group1_consumer1 Consumer group1_consumer1 rebalancing the following partitions: List(400-0) for topic topic1 with consumers: List(group1_consumer1-0) (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,122] INFO group1_consumer1 group1_consumer1-0 attempting to claim partition 400-0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:21:59,402] INFO group1_consumer1 stopping watcher executor thread for consumer group1_consumer1 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,392] ERROR group1_consumer1 error in earliestOrLatestOffset() (kafka.consumer.ZookeeperConsumerConnector:89)
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:364)
at sun.nio.ch.Net.connect(Net.java:356)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:623)
at kafka.consumer.SimpleConsumer.connect(SimpleConsumer.scala:49)
at kafka.consumer.SimpleConsumer.getOrMakeConnection(SimpleConsumer.scala:186)
at kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:149)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$earliestOrLatestOffset(ZookeeperConsumerConnector.scala:329)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$addPartitionTopicInfo(ZookeeperConsumerConnector.scala:637)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1$$anonfun$apply$11$$anonfun$apply$13.apply(ZookeeperConsumerConnector.scala:523)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1$$anonfun$apply$11$$anonfun$apply$13.apply(ZookeeperConsumerConnector.scala:520)
at scala.collection.immutable.Range$ByOne$class.foreach(Range.scala:285)
at scala.collection.immutable.Range$$anon$2.foreach(Range.scala:265)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1$$anonfun$apply$11.apply(ZookeeperConsumerConnector.scala:520)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1$$anonfun$apply$11.apply(ZookeeperConsumerConnector.scala:507)
at scala.collection.mutable.HashSet.foreach(HashSet.scala:61)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1.apply(ZookeeperConsumerConnector.scala:507)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance$1.apply(ZookeeperConsumerConnector.scala:494)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:80)
at scala.collection.Iterator$class.foreach(Iterator.scala:631)
at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:161)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:194)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:80)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.kafka$consumer$ZookeeperConsumerConnector$ZKRebalancerListener$$rebalance(ZookeeperConsumerConnector.scala:494)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anonfun$syncedRebalance$1.apply$mcVI$sp(ZookeeperConsumerConnector.scala:449)
at scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
at scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:444)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$reinitializeConsumer(ZookeeperConsumerConnector.scala:733)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:207)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:137)
at kafka.zk.ZKLoadBalanceTest.testLoadBalance(ZKLoadBalanceTest.scala:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at junit.framework.TestCase.runTest(TestCase.java:164)
at junit.framework.TestCase.runBare(TestCase.java:130)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:120)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:309)
at org.scalatest.tools.ScalaTestFramework$ScalaTestRunner.run(ScalaTestFramework.scala:40)
at sbt.TestRunner.run(TestFramework.scala:53)
at sbt.TestRunner.runTest$1(TestFramework.scala:67)
at sbt.TestRunner.run(TestFramework.scala:76)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11.runTest$2(TestFramework.scala:194)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.NamedTestTask.run(TestFramework.scala:92)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.TaskManager$Task.invoke(TaskManager.scala:62)
at sbt.impl.RunTask.doRun$1(RunTask.scala:77)
at sbt.impl.RunTask.runTask(RunTask.scala:85)
at sbt.impl.RunTask.sbt$impl$RunTask$$runIfNotRoot(RunTask.scala:60)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Control$.trapUnit(Control.scala:19)
at sbt.Distributor$Run$Worker.run(ParallelRunner.scala:131)
[2012-08-31 05:23:02,462] INFO group1_consumer1 group1_consumer1-0 successfully owned partition 400-0 for topic topic1 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,467] INFO group1_consumer1 Updating the cache (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,472] INFO group1_consumer1 Consumer group1_consumer1 selected partitions : topic1:400-0: fetched offset = -1: consumed offset = -1 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,476] INFO group1_consumer1 end rebalancing consumer group1_consumer1 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,489] INFO group1_consumer2 Connecting to zookeeper instance at 127.0.0.1:2182 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,514] INFO group1_consumer2 starting auto committer every 1000 ms (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,518] INFO group1_consumer2 begin registering consumer group1_consumer2 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,530] INFO group1_consumer2 end registering consumer group1_consumer2 in ZK (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,533] INFO group1_consumer1 begin rebalancing consumer group1_consumer1 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,536] INFO group1_consumer2 starting watcher executor thread for consumer group1_consumer2 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,542] INFO group1_consumer2 begin rebalancing consumer group1_consumer2 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,596] INFO group1_consumer1 Releasing partition ownership (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,604] INFO group1_consumer1 Consumer group1_consumer1 rebalancing the following partitions: List(400-0) for topic topic1 with consumers: List(group1_consumer1-0, group1_consumer2-0) (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,606] INFO group1_consumer2 Releasing partition ownership (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,607] INFO group1_consumer1 group1_consumer1-0 attempting to claim partition 400-0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,611] INFO group1_consumer2 Consumer group1_consumer2 rebalancing the following partitions: List(400-0) for topic topic1 with consumers: List(group1_consumer1-0, group1_consumer2-0) (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,618] WARN group1_consumer2 No broker partitions consumed by consumer thread group1_consumer2-0 for topic topic1 (kafka.consumer.ZookeeperConsumerConnector:73)
[2012-08-31 05:23:02,623] INFO group1_consumer2 Updating the cache (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,627] INFO group1_consumer2 Consumer group1_consumer2 selected partitions : (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,631] INFO group1_consumer2 end rebalancing consumer group1_consumer2 try #0 (kafka.consumer.ZookeeperConsumerConnector:61)
[2012-08-31 05:23:02,852] WARN Exception causing close of session 0x1397c33946d0006 due to java.nio.channels.CancelledKeyException (org.apache.zookeeper.server.NIOServerCnxn:623)
[2012-08-31 05:23:02,852] ERROR Unexpected Exception: (org.apache.zookeeper.server.NIOServerCnxn:445)
java.nio.channels.CancelledKeyException
at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
at sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:77)
at org.apache.zookeeper.server.NIOServerCnxn.sendBuffer(NIOServerCnxn.java:418)
at org.apache.zookeeper.server.NIOServerCnxn.sendResponse(NIOServerCnxn.java:1509)
at org.apache.zookeeper.server.FinalRequestProcessor.processRequest(FinalRequestProcessor.java:367)
at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:135)
[error] Test Failed: testLoadBalance(kafka.zk.ZKLoadBalanceTest)
junit.framework.AssertionFailedError: expected:<1> but was:<0>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:277)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:195)
at junit.framework.Assert.assertEquals(Assert.java:201)
at kafka.zk.ZKLoadBalanceTest.checkSetEqual(ZKLoadBalanceTest.scala:121)
at kafka.zk.ZKLoadBalanceTest.testLoadBalance(ZKLoadBalanceTest.scala:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at junit.framework.TestCase.runTest(TestCase.java:164)
at junit.framework.TestCase.runBare(TestCase.java:130)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:120)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at junit.framework.TestSuite.runTest(TestSuite.java:228)
at junit.framework.TestSuite.run(TestSuite.java:223)
at org.scalatest.junit.JUnit3Suite.run(JUnit3Suite.scala:309)
at org.scalatest.tools.ScalaTestFramework$ScalaTestRunner.run(ScalaTestFramework.scala:40)
at sbt.TestRunner.run(TestFramework.scala:53)
at sbt.TestRunner.runTest$1(TestFramework.scala:67)
at sbt.TestRunner.run(TestFramework.scala:76)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11.runTest$2(TestFramework.scala:194)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.TestFramework$$anonfun$10$$anonfun$apply$11$$anonfun$apply$12.apply(TestFramework.scala:205)
at sbt.NamedTestTask.run(TestFramework.scala:92)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.ScalaProject$$anonfun$sbt$ScalaProject$$toTask$1.apply(ScalaProject.scala:193)
at sbt.TaskManager$Task.invoke(TaskManager.scala:62)
at sbt.impl.RunTask.doRun$1(RunTask.scala:77)
at sbt.impl.RunTask.runTask(RunTask.scala:85)
at sbt.impl.RunTask.sbt$impl$RunTask$$runIfNotRoot(RunTask.scala:60)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.impl.RunTask$$anonfun$runTasksExceptRoot$2.apply(RunTask.scala:48)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Distributor$Run$Worker$$anonfun$2.apply(ParallelRunner.scala:131)
at sbt.Control$.trapUnit(Control.scala:19)
at sbt.Distributor$Run$Worker.run(ParallelRunner.scala:131)
[info] == core-kafka / kafka.zk.ZKLoadBalanceTest ==
[info]
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info] Test Starting: testKafkaLog4jConfigs
[2012-08-31 05:23:02,913] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-31 05:23:02,918] INFO starting log cleaner every 600000 ms (kafka.log.LogManager:61)
[2012-08-31 05:23:02,922] INFO connecting to ZK: 127.0.0.1:2182 (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:23:03,024] INFO Awaiting connections on port 50863 (kafka.network.Acceptor:130)
[2012-08-31 05:23:03,028] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-31 05:23:03,031] INFO Registering broker /brokers/ids/0 (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:23:03,078] INFO Registering broker /brokers/ids/0 succeeded with id:0,creatorId:127.0.1.1-1346408583035,host:127.0.1.1,port:50863 (kafka.server.KafkaZooKeeper:61)
[2012-08-31 05:23:03,082] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-31 05:23:03,086] INFO Kafka server started. (kafka.server.KafkaServer:61)
[2012-08-31 05:23:03,091] INFO Starting Kafka server... (kafka.server.KafkaServer:61)
[2012-08-31 05:23:03,096] INFO starting log cleaner every 300000 ms (kafka.log.LogManager:61)
[2012-08-31 05:23:03,142] INFO Awaiting connections on port 33400 (kafka.network.Acceptor:130)
[2012-08-31 05:23:03,146] INFO Will not load MX4J, mx4j-tools.jar is not in the classpath (kafka.utils.Mx4jLoader$:61)
[2012-08-31 05:23:03,150] INFO Starting log flusher every 3000 ms with the following overrides Map() (kafka.log.LogManager:61)
[2012-08-31 05:23:03,154] INFO Kafka server started. (kafka.server.KafkaServer:61)
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
log4j:WARN No appenders could be found for logger (kafka.producer.ProducerPool).
log4j:WARN Please initialize the log4j system properly.
[info] Test Passed: testKafkaLog4jConfigs
[info] Test Starting: testBrokerListLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testBrokerListLog4jAppends
[info] Test Starting: testZkConnectLog4jAppends
log4j:WARN Using default encoder - kafka.serializer.StringEncoder
[info] Test Passed: testZkConnectLog4jAppends
[info] == core-kafka / kafka.log4j.KafkaLog4jAppenderTest ==
[info]
[info] == core-kafka / test-finish ==
[error] Failed: : Total 140, Failed 2, Errors 0, Passed 138, Skipped 0
[info] == core-kafka / test-finish ==
[info]
[info] == core-kafka / Test cleanup 1 ==
[info] Deleting directory /tmp/sbt_3154007d
[info] == core-kafka / Test cleanup 1 ==
[info]
[info] == core-kafka / test-cleanup ==
[info] == core-kafka / test-cleanup ==
[error] Error running kafka.message.CompressionUtilTest: Test FAILED
[error] Error running kafka.zk.ZKLoadBalanceTest: Test FAILED
[error] Error running test: One or more subtasks failed
[info]
[info] Total time: 1034 s, completed Aug 31, 2012 5:23:08 AM
[info]
[info] Total session time: 1036 s, completed Aug 31, 2012 5:23:08 AM
[error] Error during build.
Build step 'Execute shell' marked build as failure