Posted to dev@kafka.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2017/01/11 23:38:52 UTC

Build failed in Jenkins: kafka-trunk-jdk7 #1823

See <https://builds.apache.org/job/kafka-trunk-jdk7/1823/changes>

Changes:

[ismael] KAFKA-4507; Clients should support older brokers (KIP-97)

[wangguoz] KAFKA-3715: Add granular metrics to Kafka Streams and add hierarchical

------------------------------------------
[...truncated 15590 lines...]

org.apache.kafka.common.security.JaasUtilsTest > testControlFlag PASSED
:clients:determineCommitId UP-TO-DATE
:clients:createVersionFile
:clients:jar UP-TO-DATE
:core:compileJava UP-TO-DATE
:core:compileScala
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/api/OffsetCommitRequest.scala>:79: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
            org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
                                                                 ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala>:36: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
                             commitTimestamp: Long = org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,
                                                                                                          ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala>:37: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
                             expireTimestamp: Long = org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {
                                                                                                          ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala>:492: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
              if (offsetAndMetadata.expireTimestamp == org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)
                                                                                                            ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:319: value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see corresponding Javadoc for more information.
              if (partitionData.timestamp == OffsetCommitRequest.DEFAULT_TIMESTAMP)
                                                                 ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/client/ClientUtils.scala>:93: class ProducerConfig in package producer is deprecated: This class has been deprecated and will be removed in a future release. Please use org.apache.kafka.clients.producer.ProducerConfig instead.
    val producerConfig = new ProducerConfig(props)
                             ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/client/ClientUtils.scala>:94: method fetchTopicMetadata in object ClientUtils is deprecated: This method has been deprecated and will be removed in a future release.
    fetchTopicMetadata(topics, brokers, producerConfig, correlationId)
    ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/metrics/KafkaMetricsGroup.scala>:187: object ProducerRequestStatsRegistry in package producer is deprecated: This object has been deprecated and will be removed in a future release.
    ProducerRequestStatsRegistry.removeProducerRequestStats(clientId)
    ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/network/BlockingChannel.scala>:129: method readFromReadableChannel in class NetworkReceive is deprecated: see corresponding Javadoc for more information.
      response.readFromReadableChannel(channel)
               ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:319: value timestamp in class PartitionData is deprecated: see corresponding Javadoc for more information.
              if (partitionData.timestamp == OffsetCommitRequest.DEFAULT_TIMESTAMP)
                                ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:322: value timestamp in class PartitionData is deprecated: see corresponding Javadoc for more information.
                offsetRetention + partitionData.timestamp
                                                ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:576: method offsetData in class ListOffsetRequest is deprecated: see corresponding Javadoc for more information.
    val (authorizedRequestInfo, unauthorizedRequestInfo) = offsetRequest.offsetData.asScala.partition {
                                                                         ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:576: class PartitionData in object ListOffsetRequest is deprecated: see corresponding Javadoc for more information.
    val (authorizedRequestInfo, unauthorizedRequestInfo) = offsetRequest.offsetData.asScala.partition {
                                                                                                      ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:581: constructor PartitionData in class PartitionData is deprecated: see corresponding Javadoc for more information.
      new ListOffsetResponse.PartitionData(Errors.UNKNOWN_TOPIC_OR_PARTITION.code, List[JLong]().asJava)
      ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:606: constructor PartitionData in class PartitionData is deprecated: see corresponding Javadoc for more information.
        (topicPartition, new ListOffsetResponse.PartitionData(Errors.NONE.code, offsets.map(new JLong(_)).asJava))
                         ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:613: constructor PartitionData in class PartitionData is deprecated: see corresponding Javadoc for more information.
          (topicPartition, new ListOffsetResponse.PartitionData(Errors.forException(e).code, List[JLong]().asJava))
                           ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/KafkaApis.scala>:616: constructor PartitionData in class PartitionData is deprecated: see corresponding Javadoc for more information.
          (topicPartition, new ListOffsetResponse.PartitionData(Errors.forException(e).code, List[JLong]().asJava))
                           ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala>:267: class PartitionData in object ListOffsetRequest is deprecated: see corresponding Javadoc for more information.
        val partitions = Map(topicPartition -> new ListOffsetRequest.PartitionData(earliestOrLatest, 1))
                                                                     ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala>:280: value offsets in class PartitionData is deprecated: see corresponding Javadoc for more information.
          partitionData.offsets.get(0)
                        ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/tools/ConsoleProducer.scala>:45: class OldProducer in package producer is deprecated: This class has been deprecated and will be removed in a future release. Please use org.apache.kafka.clients.producer.KafkaProducer instead.
            new OldProducer(getOldProducerProps(config))
                ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/main/scala/kafka/tools/ConsoleProducer.scala>:47: class NewShinyProducer in package producer is deprecated: This class has been deprecated and will be removed in a future release. Please use org.apache.kafka.clients.producer.KafkaProducer instead.
            new NewShinyProducer(getNewProducerProps(config))
                ^
21 warnings found
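
Several of the warnings above come from the old Scala producer classes (kafka.producer.*), whose deprecation messages recommend org.apache.kafka.clients.producer instead. Purely as orientation, a minimal sketch of that newer client API might look like the following; the broker address and topic name are placeholders, not anything taken from this build:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class NewProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Placeholder broker address; in a real setup this points at the cluster.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            // KafkaProducer is Closeable, so try-with-resources flushes and closes it.
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
            }
        }
    }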
:core:processResources UP-TO-DATE
:core:classes
:core:copyDependantLibs
:core:jar
:examples:compileJava
:examples:processResources UP-TO-DATE
:examples:classes
:examples:checkstyleMain
:examples:compileTestJava UP-TO-DATE
:examples:processTestResources UP-TO-DATE
:examples:testClasses UP-TO-DATE
:examples:checkstyleTest UP-TO-DATE
:examples:test UP-TO-DATE
:log4j-appender:compileJava
:log4j-appender:processResources UP-TO-DATE
:log4j-appender:classes
:log4j-appender:checkstyleMain
:log4j-appender:compileTestJava
:log4j-appender:processTestResources UP-TO-DATE
:log4j-appender:testClasses
:log4j-appender:checkstyleTest
:log4j-appender:test

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testLog4jAppends PASSED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs STARTED

org.apache.kafka.log4jappender.KafkaLog4jAppenderTest > testKafkaLog4jConfigs PASSED
:core:compileTestJava UP-TO-DATE
:core:compileTestScala
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/core/src/test/scala/unit/kafka/metrics/MetricsTest.scala>:88: method createAndShutdownStep in class MetricsTest is deprecated: This test has been deprecated and it will be removed in a future release
    createAndShutdownStep("group0", "consumer0", "producer0")
    ^
one warning found
:core:processTestResources UP-TO-DATE
:core:testClasses
:connect:api:compileJava
:connect:api:processResources UP-TO-DATE
:connect:api:classes
:connect:api:copyDependantLibs
:connect:api:jar
:connect:json:compileJava
:connect:json:processResources UP-TO-DATE
:connect:json:classes
:connect:json:copyDependantLibs
:connect:json:jar
:streams:compileJava
Download https://repo1.maven.org/maven2/org/rocksdb/rocksdbjni/5.0.1/rocksdbjni-5.0.1.pom
Download https://repo1.maven.org/maven2/org/rocksdb/rocksdbjni/5.0.1/rocksdbjni-5.0.1.jar
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java>:256: warning: [deprecation] remove(WriteOptions,byte[]) in RocksDB has been deprecated
                db.remove(wOptions, rawKey);
                  ^
<https://builds.apache.org/job/kafka-trunk-jdk7/ws/streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBStore.java>:277: warning: [deprecation] remove(byte[]) in RocksDB has been deprecated
                    db.remove(rawKey);
                      ^
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
2 warnings
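
The two RocksDBStore warnings flag RocksDB's remove(...) methods as deprecated; in recent rocksdbjni releases the equivalent non-deprecated call is delete(...). A minimal, self-contained sketch of that usage (the store path and key below are placeholders, not from this build) could look like:

    import org.rocksdb.Options;
    import org.rocksdb.RocksDB;
    import org.rocksdb.RocksDBException;
    import org.rocksdb.WriteOptions;

    public class RocksDbDeleteSketch {
        public static void main(String[] args) throws RocksDBException {
            RocksDB.loadLibrary();
            // Options, WriteOptions and RocksDB are AutoCloseable in recent rocksdbjni versions.
            try (Options options = new Options().setCreateIfMissing(true);
                 WriteOptions wOptions = new WriteOptions();
                 RocksDB db = RocksDB.open(options, "/tmp/rocksdb-sketch")) {  // placeholder path
                byte[] rawKey = "some-key".getBytes();
                db.put(rawKey, "some-value".getBytes());
                db.delete(wOptions, rawKey);  // replaces the deprecated db.remove(wOptions, rawKey)
                db.delete(rawKey);            // replaces the deprecated db.remove(rawKey)
            }
        }
    }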
:streams:processResources UP-TO-DATE
:streams:classes
:streams:checkstyleMain
:streams:compileTestJava
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.

:streams:processTestResources
:streams:testClasses
:streams:checkstyleTest
:streams:test

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[0] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[0] PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[1] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[1] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[0] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[0] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[1] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[1] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[2] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[2] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[3] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[3] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[4] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[4] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[5] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[5] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[6] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[6] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[7] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[7] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[8] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[8] PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithIntermediateUserTopic FAILED
    java.lang.AssertionError: Condition not met within timeout 30000. Did not receive 10 number of records
        at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
        at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:213)
        at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:182)
        at org.apache.kafka.streams.integration.ResetIntegrationTest.testReprocessingFromScratchAfterResetWithIntermediateUserTopic(ResetIntegrationTest.java:160)
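
The assertion above comes from a test utility that polls a condition until a timeout expires. Purely as an illustration of that pattern, a rough, self-contained approximation follows; the names are invented for the sketch and are not Kafka's actual TestUtils code:

    public class WaitForConditionSketch {

        interface Condition {
            boolean holds();
        }

        // Poll the condition every 100 ms until it holds or maxWaitMs elapses.
        static void waitForCondition(Condition condition, long maxWaitMs, String details)
                throws InterruptedException {
            long deadline = System.currentTimeMillis() + maxWaitMs;
            while (!condition.holds()) {
                if (System.currentTimeMillis() > deadline) {
                    throw new AssertionError("Condition not met within timeout " + maxWaitMs + ". " + details);
                }
                Thread.sleep(100);  // poll interval; the real helper's interval may differ
            }
        }

        public static void main(String[] args) throws InterruptedException {
            long start = System.currentTimeMillis();
            // Example: wait up to 1 second for 500 ms to elapse.
            waitForCondition(() -> System.currentTimeMillis() - start > 500, 1000, "waiting for 500 ms to pass");
        }
    }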

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
ERROR: Could not install GRADLE_2_4_RC_2_HOME
java.lang.NullPointerException
Build timed out (after 240 minutes). Marking the build as failed.
Build was aborted
Recording test results
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2

Jenkins build is back to normal : kafka-trunk-jdk7 #1825

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/kafka-trunk-jdk7/1825/changes>


Build failed in Jenkins: kafka-trunk-jdk7 #1824

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/kafka-trunk-jdk7/1824/>

------------------------------------------
[...truncated 18225 lines...]

org.apache.kafka.streams.KafkaStreamsTest > testStartAndClose PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCloseIsIdempotent PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotCleanupWhileRunning PASSED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice STARTED

org.apache.kafka.streams.KafkaStreamsTest > testCannotStartTwice PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > shouldCountClicksPerRegion[0] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > shouldCountClicksPerRegion[0] PASSED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > shouldCountClicksPerRegion[1] STARTED

org.apache.kafka.streams.integration.KStreamKTableJoinIntegrationTest > shouldCountClicksPerRegion[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldBeAbleToQueryState[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldBeAbleToQueryState[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldNotMakeStoreAvailableUntilAllStoresAvailable[0] FAILED
    java.lang.AssertionError: Condition not met within timeout 30000. waiting for store count-by-key
        at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
        at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:501)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > queryOnRebalance[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > queryOnRebalance[0] FAILED
    java.lang.AssertionError: Condition not met within timeout 60000. Did not receive 1 number of records
        at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
        at org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinValuesRecordsReceived(IntegrationTestUtils.java:253)
        at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.waitUntilAtLeastNumRecordProcessed(QueryableStateIntegrationTest.java:668)
        at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.queryOnRebalance(QueryableStateIntegrationTest.java:349)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > concurrentAccesses[0] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > concurrentAccesses[0] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldBeAbleToQueryState[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldBeAbleToQueryState[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > shouldNotMakeStoreAvailableUntilAllStoresAvailable[1] FAILED
    java.lang.AssertionError: Condition not met within timeout 30000. waiting for store count-by-key
        at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:259)
        at org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldNotMakeStoreAvailableUntilAllStoresAvailable(QueryableStateIntegrationTest.java:501)

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > queryOnRebalance[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > queryOnRebalance[1] PASSED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > concurrentAccesses[1] STARTED

org.apache.kafka.streams.integration.QueryableStateIntegrationTest > concurrentAccesses[1] PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testShouldReadFromRegexAndNamedTopics STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testShouldReadFromRegexAndNamedTopics PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testRegexMatchesTopicsAWhenCreated STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testRegexMatchesTopicsAWhenCreated PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testMultipleConsumersCanReadFromPartitionedTopic STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testMultipleConsumersCanReadFromPartitionedTopic PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testRegexMatchesTopicsAWhenDeleted STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testRegexMatchesTopicsAWhenDeleted PASSED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testNoMessagesSentExceptionFromOverlappingPatterns STARTED

org.apache.kafka.streams.integration.RegexSourceIntegrationTest > testNoMessagesSentExceptionFromOverlappingPatterns PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[0] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[0] PASSED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[1] STARTED

org.apache.kafka.streams.integration.KStreamRepartitionJoinTest > shouldCorrectlyRepartitionOnJoinOperations[1] PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > shouldCompactTopicsForStateChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > shouldCompactTopicsForStateChangelogs PASSED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > shouldUseCompactAndDeleteForWindowStoreChangelogs STARTED

org.apache.kafka.streams.integration.InternalTopicIntegrationTest > shouldUseCompactAndDeleteForWindowStoreChangelogs PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregate STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregate PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCount STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCount PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCountSessionWindows STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldCountSessionWindows PASSED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregateWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > shouldAggregateWindowed PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic STARTED

org.apache.kafka.streams.integration.ResetIntegrationTest > testReprocessingFromScratchAfterResetWithoutIntermediateUserTopic PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[0] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[0] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[1] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[1] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[2] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[2] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[3] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[3] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[4] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[4] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[5] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[5] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[6] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[6] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[7] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[7] PASSED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[8] STARTED

org.apache.kafka.streams.integration.KTableKTableJoinIntegrationTest > KTableKTableJoin[8] PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduce STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduce PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldGroupByKey STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldGroupByKey PASSED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduceWindowed STARTED

org.apache.kafka.streams.integration.KStreamAggregationDedupIntegrationTest > shouldReduceWindowed PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > shouldFanoutTheInput[0] STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > shouldFanoutTheInput[0] PASSED

org.apache.kafka.streams.integration.FanoutIntegrationTest > shouldFanoutTheInput[1] STARTED

org.apache.kafka.streams.integration.FanoutIntegrationTest > shouldFanoutTheInput[1] PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testLeftKStreamKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testOuterKTableKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testOuterKTableKTable PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testOuterKStreamKStream STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testOuterKStreamKStream PASSED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKStreamKTable STARTED

org.apache.kafka.streams.integration.JoinIntegrationTest > testInnerKStreamKTable PASSED

678 tests completed, 3 failed
:streams:test FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':streams:test'.
> There were failing tests. See the report at: file://<https://builds.apache.org/job/kafka-trunk-jdk7/ws/streams/build/reports/tests/test/index.html>

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
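
For instance, to re-run just the failing module with more diagnostics (assuming the Gradle wrapper checked into the Kafka repository), something along the lines of:

    ./gradlew :streams:test --stacktrace --info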

BUILD FAILED

Total time: 1 hrs 43 mins 16.104 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2