Posted to dev@geode.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/09/23 16:12:06 UTC

Build failed in Jenkins: Geode-spark-connector #78

See <https://builds.apache.org/job/Geode-spark-connector/78/changes>

Changes:

[hkhamesra] GEODE-37 In spark connector we call TcpClient static method to get the

[klund] GEODE-1906: fix misspelling of Successfully

[upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators with gateways

------------------------------------------
[...truncated 1883 lines...]
16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is /tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file server' on port 40135.
16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on port 4041.
16/09/23 16:11:15 INFO SparkUI: Started SparkUI at http://localhost:4041
16/09/23 16:11:15 INFO Executor: Starting executor ID <driver> on host localhost
16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on 41182
16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register BlockManager
16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block manager localhost:41182 with 2.8 GB RAM, BlockManagerId(<driver>, localhost, 41182)
16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
[info] RetrieveRegionIntegrationTest:
......

=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 27662.
=== GeodeRunner: starting locator on port 23825
=== GeodeRunner: waiting for locator on port 23825
....=== GeodeRunner: done waiting for locator on port 23825
=== GeodeRunner: starting server1 with clientPort 28993
=== GeodeRunner: starting server2 with clientPort 26318
=== GeodeRunner: starting server3 with clientPort 29777
=== GeodeRunner: starting server4 with clientPort 22946
....
............................................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[23825] as locator is currently online.
Process ID: 1860
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=29684 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

................
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4 on hemera.apache.org[22946] as server4 is currently online.
Process ID: 2204
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4/server4.log
JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

..
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[28993] as server1 is currently online.
Process ID: 2199
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar



Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[26318] as server2 is currently online.
Process ID: 2153
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3 on hemera.apache.org[29777] as server3 is currently online.
Process ID: 2175
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3/server3.log
JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(28993, 26318, 29777, 22946).length servers have been started
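
The literal "WrappedArray(28993, 26318, 29777, 22946).length" above looks like a string-interpolation slip in the test harness rather than a problem with the servers: in Scala's s-interpolator, $ports.length substitutes only ports and leaves ".length" as literal text. A minimal sketch of the likely mistake and the intended form; the identifier ports is hypothetical, not taken from the harness source:

    // Hypothetical reconstruction; the identifier `ports` is illustrative.
    val ports: Seq[Int] = Array(28993, 26318, 29777, 22946) // an Array viewed as a Seq prints as WrappedArray(...)
    // Without braces only `ports` is interpolated; ".length" stays literal text:
    println(s"All $ports.length servers have been started")
    // Intended form, which interpolates the count:
    println(s"All ${ports.length} servers have been started")
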
Deploying:geode-functions_2.10-0.5.0.jar
16/09/23 16:11:43 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
[info]   ...
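
The aborted suite above is the root failure of this build: the creation-site trace shows that RDDJoinRegionIntegrationTest.beforeAll (RDDJoinRegionIntegrationTest.scala:50) created a SparkContext that was still alive when the next suite started, so every later beforeAll tripped SPARK-2243's one-context-per-JVM check. A minimal sketch of the two remedies the exception message itself names, assuming a ScalaTest suite on Spark 1.x; the class and member names below are illustrative, not the project's actual suites:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.scalatest.{BeforeAndAfterAll, FunSuite}

    class ExampleIntegrationTest extends FunSuite with BeforeAndAfterAll {
      private var sc: SparkContext = _

      override def beforeAll(): Unit = {
        val conf = new SparkConf()
          .setMaster("local[2]")
          .setAppName("ExampleIntegrationTest")
          // Workaround named in the exception: tolerate another live context.
          .set("spark.driver.allowMultipleContexts", "true")
        sc = new SparkContext(conf)
      }

      override def afterAll(): Unit = {
        // Preferred fix: stop the context so the next suite can create its own.
        if (sc != null) sc.stop()
      }

      test("context is usable") {
        assert(sc.parallelize(1 to 4).count() == 4)
      }
    }

Stopping the context in afterAll addresses the leak itself; allowMultipleContexts only suppresses the check.
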
[info] BasicIntegrationTest:
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
=== GeodeRunner: stop server 3.
=== GeodeRunner: stop server 4.
............



=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 23825.
=== GeodeRunner: starting locator on port 23573
=== GeodeRunner: waiting for locator on port 23573
....=== GeodeRunner: done waiting for locator on port 23573
=== GeodeRunner: starting server1 with clientPort 27897
=== GeodeRunner: starting server2 with clientPort 20289
....
....................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[23573] as locator is currently online.
Process ID: 3273
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=23053 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

........
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[20289] as server2 is currently online.
Process ID: 3465
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[27897] as server1 is currently online.
Process ID: 3505
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(27897, 20289).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/23 16:12:09 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
[info]   ...
[info] ScalaTest
[info] Run completed in 1 minute, 59 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 1, aborted 3
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] *** 3 SUITES ABORTED ***
[error] Error: Total 3, Failed 0, Errors 3, Passed 0
[error] Error during tests:
[error] 	ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.BasicIntegrationTest
[error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 128 s, completed Sep 23, 2016 4:12:09 PM
Build step 'Execute shell' marked build as failure
Recording test results
Skipped archiving because build is not successful
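
Once the context leak is fixed, an individual aborted suite can be rerun in isolation through the same it configuration the failing task uses (geode-spark-connector/it:test above), for example with sbt "geode-spark-connector/it:testOnly ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest", assuming sbt 0.13-style syntax.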

Jenkins build is back to normal : Geode-spark-connector #81

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Geode-spark-connector/81/changes>


Build failed in Jenkins: Geode-spark-connector #80

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Geode-spark-connector/80/>

------------------------------------------
[...truncated 1883 lines...]
16/09/26 15:21:01 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7552e30c-bb8f-45c7-be85-d259f3b7fd0c/httpd-6a47a70a-a5c7-4753-a07d-ed3f38c82625
16/09/26 15:21:01 INFO HttpServer: Starting HTTP Server
16/09/26 15:21:01 INFO Utils: Successfully started service 'HTTP file server' on port 36213.
16/09/26 15:21:01 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/26 15:21:06 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/09/26 15:21:11 INFO Utils: Successfully started service 'SparkUI' on port 4041.
16/09/26 15:21:11 INFO SparkUI: Started SparkUI at http://localhost:4041
16/09/26 15:21:11 INFO Executor: Starting executor ID <driver> on host localhost
16/09/26 15:21:11 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:46093/user/HeartbeatReceiver
16/09/26 15:21:11 INFO NettyBlockTransferService: Server created on 46091
16/09/26 15:21:11 INFO BlockManagerMaster: Trying to register BlockManager
16/09/26 15:21:11 INFO BlockManagerMasterActor: Registering block manager localhost:46091 with 2.8 GB RAM, BlockManagerId(<driver>, localhost, 46091)
16/09/26 15:21:11 INFO BlockManagerMaster: Registered BlockManager
[info] RetrieveRegionIntegrationTest:
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
......

=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 20287.
=== GeodeRunner: starting locator on port 24079
=== GeodeRunner: waiting for locator on port 24079
....=== GeodeRunner: done waiting for locator on port 24079
=== GeodeRunner: starting server1 with clientPort 29698
=== GeodeRunner: starting server2 with clientPort 21056
=== GeodeRunner: starting server3 with clientPort 29845
=== GeodeRunner: starting server4 with clientPort 26222
...
............................................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[24079] as locator is currently online.
Process ID: 30902
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=24854 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

....................
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4 on hemera.apache.org[26222] as server4 is currently online.
Process ID: 31234
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4/server4.log
JVM Arguments: -Dgemfire.locators=localhost[24079] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[29698] as server1 is currently online.
Process ID: 31267
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[24079] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[21056] as server2 is currently online.
Process ID: 31302
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[24079] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3 on hemera.apache.org[29845] as server3 is currently online.
Process ID: 31358
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3/server3.log
JVM Arguments: -Dgemfire.locators=localhost[24079] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(29698, 21056, 29845, 26222).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/26 15:21:38 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
[info]   ...
[info] BasicIntegrationTest:
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
=== GeodeRunner: stop server 3.
=== GeodeRunner: stop server 4.
............



=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 24079.
=== GeodeRunner: starting locator on port 21153
=== GeodeRunner: waiting for locator on port 21153
....=== GeodeRunner: done waiting for locator on port 21153
=== GeodeRunner: starting server1 with clientPort 23625
=== GeodeRunner: starting server2 with clientPort 21090
...
..................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[21153] as locator is currently online.
Process ID: 32554
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=28144 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

........
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[21090] as server2 is currently online.
Process ID: 303
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[21153] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[23625] as server1 is currently online.
Process ID: 300
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[21153] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(23625, 21090).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/26 15:22:04 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
[info]   ...
[info] ScalaTest
[info] Run completed in 1 minute, 58 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 1, aborted 3
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] *** 3 SUITES ABORTED ***
[error] Error: Total 3, Failed 0, Errors 3, Passed 0
[error] Error during tests:
[error] 	ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.BasicIntegrationTest
[error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 128 s, completed Sep 26, 2016 3:22:05 PM
Build step 'Execute shell' marked build as failure
Recording test results
Skipped archiving because build is not successful

Build failed in Jenkins: Geode-spark-connector #79

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Geode-spark-connector/79/changes>

Changes:

[gzhou] GEODE-1894: there's a race that AckReader thred is reading for ack

------------------------------------------
[...truncated 1884 lines...]
16/09/24 15:56:33 INFO HttpFileServer: HTTP File server directory is /tmp/spark-818af6ab-9026-44de-a5ea-103aa3a0b9ed/httpd-4eb1d544-0563-49c9-85b6-38809df75dc3
16/09/24 15:56:33 INFO HttpServer: Starting HTTP Server
16/09/24 15:56:33 INFO Utils: Successfully started service 'HTTP file server' on port 53100.
16/09/24 15:56:33 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/24 15:56:38 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16/09/24 15:56:43 INFO Utils: Successfully started service 'SparkUI' on port 4041.
16/09/24 15:56:43 INFO SparkUI: Started SparkUI at http://localhost:4041
16/09/24 15:56:43 INFO Executor: Starting executor ID <driver> on host localhost
16/09/24 15:56:43 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:47343/user/HeartbeatReceiver
16/09/24 15:56:43 INFO NettyBlockTransferService: Server created on 36030
16/09/24 15:56:43 INFO BlockManagerMaster: Trying to register BlockManager
16/09/24 15:56:43 INFO BlockManagerMasterActor: Registering block manager localhost:36030 with 2.8 GB RAM, BlockManagerId(<driver>, localhost, 36030)
16/09/24 15:56:43 INFO BlockManagerMaster: Registered BlockManager
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
[info] RetrieveRegionIntegrationTest:
......

=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 26558.
=== GeodeRunner: starting locator on port 21281
=== GeodeRunner: waiting for locator on port 21281
....=== GeodeRunner: done waiting for locator on port 21281
=== GeodeRunner: starting server1 with clientPort 21702
=== GeodeRunner: starting server2 with clientPort 21557
=== GeodeRunner: starting server3 with clientPort 20123
=== GeodeRunner: starting server4 with clientPort 22028
...
........................................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[21281] as locator is currently online.
Process ID: 9610
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=22484 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

.....................
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[21557] as server2 is currently online.
Process ID: 9932
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar



Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[21702] as server1 is currently online.
Process ID: 9972
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3 on hemera.apache.org[20123] as server3 is currently online.
Process ID: 10028
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3/server3.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4 on hemera.apache.org[22028] as server4 is currently online.
Process ID: 9906
Uptime: 9 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4/server4.log
JVM Arguments: -Dgemfire.locators=localhost[21281] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(21702, 21557, 20123, 22028).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/24 15:57:10 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
[info]   ...
[info] BasicIntegrationTest:
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
=== GeodeRunner: stop server 3.
=== GeodeRunner: stop server 4.
............



=== GeodeRunner: stop locator
....
Successfully stop Geode locator at port 21281.
=== GeodeRunner: starting locator on port 20562
=== GeodeRunner: waiting for locator on port 20562
....=== GeodeRunner: done waiting for locator on port 20562
=== GeodeRunner: starting server1 with clientPort 26128
=== GeodeRunner: starting server2 with clientPort 26984
...
....................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[20562] as locator is currently online.
Process ID: 11019
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=27578 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.

......
Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[26984] as server2 is currently online.
Process ID: 11222
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
JVM Arguments: -Dgemfire.locators=localhost[20562] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar


Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[26128] as server1 is currently online.
Process ID: 11246
Uptime: 7 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
JVM Arguments: -Dgemfire.locators=localhost[20562] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

All WrappedArray(26128, 26984).length servers have been started
Deploying:geode-functions_2.10-0.5.0.jar
16/09/24 15:57:36 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:497)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
org.junit.runners.Suite.runChild(Suite.java:127)
org.junit.runners.Suite.runChild(Suite.java:26)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
org.junit.runners.ParentRunner.run(ParentRunner.java:309)
[info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
[info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
[info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
[info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
[info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
[info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
[info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
[info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
[info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
[info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[info] java.lang.Thread.run(Thread.java:745)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
[info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
[info]   at scala.Option.foreach(Option.scala:236)
[info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
[info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
[info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
[info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
[info]   ...
[info] ScalaTest
[info] Run completed in 1 minute, 57 seconds.
[info] Total number of tests run: 0
[info] Suites: completed 1, aborted 3
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] *** 3 SUITES ABORTED ***
[error] Error: Total 3, Failed 0, Errors 3, Passed 0
[error] Error during tests:
[error] 	ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
[error] 	ittest.org.apache.geode.spark.connector.BasicIntegrationTest
[error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 126 s, completed Sep 24, 2016 3:57:36 PM
Build step 'Execute shell' marked build as failure
Recording test results
Skipped archiving because build is not successful
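
All three suite aborts in this run trace back to the same SPARK-2243 constraint: JavaApiIntegrationTest.setUpBeforeClass (visible in the warning's stack trace) constructs a JavaSparkContext that is still registered when the next suite's beforeAll tries to create its own. Below is a minimal sketch of the lifecycle the error message implies, with hypothetical class and app names; stopping the context in @AfterClass releases the one-per-JVM slot.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class SparkContextLifecycleSketch {

      private static JavaSparkContext jsc;

      @BeforeClass
      public static void setUpBeforeClass() {
        SparkConf conf = new SparkConf()
            .setAppName("connector-it-sketch")  // hypothetical app name
            .setMaster("local[2]");
        // Escape hatch named in the error message itself, if the suites
        // really must each own a context:
        // conf.set("spark.driver.allowMultipleContexts", "true");
        jsc = new JavaSparkContext(conf);
      }

      @AfterClass
      public static void tearDownAfterClass() {
        // Stopping the context frees the one-per-JVM slot (SPARK-2243),
        // so a later suite's beforeAll can construct its own context.
        if (jsc != null) {
          jsc.stop();
        }
      }
    }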

Re: Build failed in Jenkins: Geode-spark-connector #78

Posted by Dan Smith <ds...@pivotal.io>.
I checked in a fix for the current dependencies on spring-core (thanks Kirk
and Udo). But we need to work on avoiding this issue in the future. Having
"optional" dependencies in the core seems like the main issue; a secondary
issue is that we don't have tests of geode-core that run with just the
non-optional geode-core dependencies available. Well, actually we do have
some tests in geode-examples, but apparently those aren't running as part
of precheckin! I filed GEODE-1937 for that.

-Dan
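
For reference, the fix amounts to the swap Kirk describes below: call the null-safe utility that ships in geode-core rather than Spring's StringUtils. A minimal sketch; the class and property names are hypothetical, and only the isEmpty(String) signature is taken from this thread.

    import org.apache.geode.internal.lang.StringUtils;

    public class SslPropertySketch {

      // Hypothetical stand-in for the checks SSLConfigurationFactory makes
      // on SSL system properties. Using isEmpty(String) from
      // org.apache.geode.internal.lang.StringUtils instead of
      // org.springframework.util.StringUtils lets spring-core stay a truly
      // optional dependency.
      static boolean hasValue(String property) {
        return !StringUtils.isEmpty(property);
      }

      public static void main(String[] args) {
        System.out.println(hasValue(System.getProperty("gemfire.example-ssl-property")));
      }
    }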

On Fri, Sep 23, 2016 at 8:15 PM, Kirk Lund <kl...@apache.org> wrote:

> org.apache.geode.internal.lang.StringUtils includes isEmpty(String)
>
> -Kirk
>
> On Friday, September 23, 2016, Udo Kohlmeyer <uk...@pivotal.io>
> wrote:
>
> > I can easily fix this.
> >
> > Surely we have a utility lying around in the core that can handle
> > "String.isEmpty".
> >
> > --Udo
> >
> >
> > On 24/09/2016 9:56 AM, Anthony Baker wrote:
> >
> >> Yep, I’m seeing failures on any client app that doesn’t explicitly
> >> include spring as a dependency.
> >>
> >> Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/util/StringUtils
> >>         at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:274)
> >>         at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:270)
> >>         at org.apache.geode.internal.net.SSLConfigurationFactory.createSSLConfigForComponent(SSLConfigurationFactory.java:138)
> >>         at org.apache.geode.internal.net.SSLConfigurationFactory.getSSLConfigForComponent(SSLConfigurationFactory.java:67)
> >>         at org.apache.geode.internal.net.SocketCreatorFactory.getSocketCreatorForComponent(SocketCreatorFactory.java:67)
> >>         at org.apache.geode.distributed.internal.tcpserver.TcpClient.<init>(TcpClient.java:69)
> >>         at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.<init>(AutoConnectionSourceImpl.java:114)
> >>         at org.apache.geode.cache.client.internal.PoolImpl.getSourceImpl(PoolImpl.java:579)
> >>         at org.apache.geode.cache.client.internal.PoolImpl.<init>(PoolImpl.java:219)
> >>         at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:132)
> >>         at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:319)
> >>         at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2943)
> >>         at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1293)
> >>         at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1124)
> >>         at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:765)
> >>         at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:740)
> >>         at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:235)
> >>         at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:189)
> >>         at HelloWorld.main(HelloWorld.java:25)
> >> Caused by: java.lang.ClassNotFoundException: org.springframework.util.StringUtils
> >>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> >>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> >>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> >>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> >>         ... 19 more
> >>
> >> Anthony
> >>
> >>
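
The HelloWorld in the trace above needs nothing exotic to reproduce this: any bare client running against geode-core plus geode-dependencies fails inside create(). A sketch of such a client, assuming a locator-based pool; the locator host and port are made up for illustration.

    import org.apache.geode.cache.client.ClientCache;
    import org.apache.geode.cache.client.ClientCacheFactory;

    public class HelloWorld {

      public static void main(String[] args) {
        // With spring-core absent from the classpath (it is marked
        // optional), create() reaches SSLConfigurationFactory and dies
        // with the NoClassDefFoundError shown above.
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 10334)  // assumed locator address
            .create();
        cache.close();
      }
    }
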
> >> On Sep 23, 2016, at 4:34 PM, Dan Smith <ds...@pivotal.io> wrote:
> >>>
> >>> I created GEODE-1934 for this. It looks like the problem is actually
> that
> >>> our dependencies for geode-core are messed up. spring-core is marked
> >>> optional, but we're using it in critical places like this
> >>> SSLConfigurationFactory.
> >>>
> >>> In my opinion we shouldn't depend on spring-core at all unless we're
> >>> actually going to use it for things other than StringUtils. I think
> we've
> >>> accidentally introduced dependencies on it because the gfsh code in the
> >>> core is pulling in a bunch of spring libraries.
> >>>
> >>> -Dan
> >>>
> >>>
> >>> On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
> >>> jenkins@builds.apache.org> wrote:
> >>>
> >>> See <https://builds.apache.org/job/Geode-spark-connector/78/changes>
> >>>>
> >>>> Changes:
> >>>>
> >>>> [hkhamesra] GEODE-37 In spark connector we call TcpClient static
> method
> >>>> to
> >>>> get the
> >>>>
> >>>> [klund] GEODE-1906: fix misspelling of Successfully
> >>>>
> >>>> [upthewaterspout] GEODE-1915: Prevent deadlock registering
> instantiators
> >>>> with gateways
> >>>>
> >>>> ------------------------------------------
> >>>> [...truncated 1883 lines...]

Re: Build failed in Jenkins: Geode-spark-connector #78

Posted by Kirk Lund <kl...@apache.org>.
org.apache.geode.internal.lang.StringUtils includes isEmpty(String)

-Kirk
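
One detail worth spelling out here: Udo's "String.isEmpty" is an instance method on java.lang.String and throws NullPointerException when the reference itself is null, which is why a null-safe helper like the one Kirk names is the drop-in replacement for Spring's null-safe version. A sketch of the semantics involved (class and helper names are hypothetical):

    public class IsEmptySketch {

      // Null-safe emptiness check; plain s.isEmpty() would throw a
      // NullPointerException on the null case. Presumably this matches
      // isEmpty(String) in org.apache.geode.internal.lang.StringUtils.
      static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
      }

      public static void main(String[] args) {
        System.out.println(isEmpty(null));     // true, no NPE
        System.out.println(isEmpty(""));       // true
        System.out.println(isEmpty("geode"));  // false
      }
    }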

On Friday, September 23, 2016, Udo Kohlmeyer <uk...@pivotal.io> wrote:

> I can easily fix this.
>
> > Surely we have a utility lying around in the core that can handle
> > "String.isEmpty".
>
> --Udo
>
>
> On 24/09/2016 9:56 AM, Anthony Baker wrote:
>
>> Yep, I’m seeing failures on any client app that doesn’t explicitly
>> include spring as dependency.
>>
>> Exception in thread "main" java.lang.NoClassDefFoundError:
>> org/springframework/util/StringUtils
>>         at org.apache.geode.internal.net.SSLConfigurationFactory.config
>> ureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:274)
>>         at org.apache.geode.internal.net.SSLConfigurationFactory.config
>> ureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:270)
>>         at org.apache.geode.internal.net.SSLConfigurationFactory.create
>> SSLConfigForComponent(SSLConfigurationFactory.java:138)
>>         at org.apache.geode.internal.net.SSLConfigurationFactory.getSSL
>> ConfigForComponent(SSLConfigurationFactory.java:67)
>>         at org.apache.geode.internal.net.SocketCreatorFactory.getSocket
>> CreatorForComponent(SocketCreatorFactory.java:67)
>>         at org.apache.geode.distributed.internal.tcpserver.TcpClient.<i
>> nit>(TcpClient.java:69)
>>         at org.apache.geode.cache.client.internal.AutoConnectionSourceI
>> mpl.<init>(AutoConnectionSourceImpl.java:114)
>>         at org.apache.geode.cache.client.internal.PoolImpl.getSourceImp
>> l(PoolImpl.java:579)
>>         at org.apache.geode.cache.client.internal.PoolImpl.<init>(PoolI
>> mpl.java:219)
>>         at org.apache.geode.cache.client.internal.PoolImpl.create(PoolI
>> mpl.java:132)
>>         at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolF
>> actoryImpl.java:319)
>>         at org.apache.geode.internal.cache.GemFireCacheImpl.determineDe
>> faultPool(GemFireCacheImpl.java:2943)
>>         at org.apache.geode.internal.cache.GemFireCacheImpl.initializeD
>> eclarativeCache(GemFireCacheImpl.java:1293)
>>         at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(
>> GemFireCacheImpl.java:1124)
>>         at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate
>> (GemFireCacheImpl.java:765)
>>         at org.apache.geode.internal.cache.GemFireCacheImpl.createClien
>> t(GemFireCacheImpl.java:740)
>>         at org.apache.geode.cache.client.ClientCacheFactory.basicCreate
>> (ClientCacheFactory.java:235)
>>         at org.apache.geode.cache.client.ClientCacheFactory.create(Clie
>> ntCacheFactory.java:189)
>>         at HelloWorld.main(HelloWorld.java:25)
>> Caused by: java.lang.ClassNotFoundException:
>> org.springframework.util.StringUtils
>>         at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>>         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
>>         at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>>         ... 19 more
>>
>> Anthony
>>
>>
>> On Sep 23, 2016, at 4:34 PM, Dan Smith <ds...@pivotal.io> wrote:
>>>
>>> I created GEODE-1934 for this. It looks like the problem is actually that
>>> our dependencies for geode-core are messed up. spring-core is marked
>>> optional, but we're using it in critical places like this
>>> SSLConfigurationFactory.
>>>
>>> In my opinion we shouldn't depend on spring-core at all unless we're
>>> actually going to use it for things other than StringUtils. I think we've
>>> accidentally introduced dependencies on it because the gfsh code in the
>>> core is pulling in a bunch of spring libraries.
>>>
>>> -Dan
>>>
>>>
>>> On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
>>> jenkins@builds.apache.org> wrote:
>>>
>>> See <https://builds.apache.org/job/Geode-spark-connector/78/changes>
>>>>
>>>> Changes:
>>>>
>>>> [hkhamesra] GEODE-37 In spark connector we call TcpClient static method
>>>> to
>>>> get the
>>>>
>>>> [klund] GEODE-1906: fix misspelling of Successfully
>>>>
>>>> [upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators
>>>> with gateways
>>>>
>>>> ------------------------------------------
>>>> [...truncated 1883 lines...]
>>>> 16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is
>>>> /tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-
>>>> 02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
>>>> 16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
>>>> 16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file
>>>> server' on port 40135.
>>>> 16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
>>>> 16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port
>>>> 4040. Attempting port 4041.
>>>> 16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on
>>>> port 4041.
>>>> 16/09/23 16:11:15 INFO SparkUI: Started SparkUI at
>>>> http://localhost:4041
>>>> 16/09/23 16:11:15 INFO Executor: Starting executor ID <driver> on host
>>>> localhost
>>>> 16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver:
>>>> akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
>>>> 16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on
>>>> 41182
>>>> 16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register
>>>> BlockManager
>>>> 16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block
>>>> manager
>>>> localhost:41182 with 2.8 GB RAM, BlockManagerId(<driver>, localhost,
>>>> 41182)
>>>> 16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
>>>> === GeodeRunner: stop server 1.
>>>> === GeodeRunner: stop server 2.
>>>> [0m[ [0minfo [0m]  [0m [32mRetrieveRegionIntegrationTest: [0m [0m
>>>> ......
>>>>
>>>> === GeodeRunner: stop locator
>>>> ...
>>>> Successfully stop Geode locator at port 27662.
>>>> === GeodeRunner: starting locator on port 23825
>>>> === GeodeRunner: waiting for locator on port 23825
>>>> ....=== GeodeRunner: done waiting for locator on port 23825
>>>> === GeodeRunner: starting server1 with clientPort 28993
>>>> === GeodeRunner: starting server2 with clientPort 26318
>>>> === GeodeRunner: starting server3 with clientPort 29777
>>>> === GeodeRunner: starting server4 with clientPort 22946
>>>> ....
>>>> ............................................Locator in
>>>> /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
>>>> connector/geode-spark-connector/target/testgeode/locator on
>>>> hemera.apache.org[23825] as locator is currently online.
>>>> Process ID: 1860
>>>> Uptime: 4 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
>>>> locator/locator.log
>>>> JVM Arguments: -Dgemfire.enable-cluster-configuration=true
>>>> -Dgemfire.load-cluster-configuration-from-dir=false
>>>> -Dgemfire.jmx-manager-http-port=29684 -Dgemfire.launcher.registerSig
>>>> nalHandlers=true
>>>> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
>>>> gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-assembly/build/install/apache-geode/lib/
>>>> geode-core-1.0.0-
>>>> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/
>>>> Geode-spark-
>>>> connector/geode-assembly/build/install/apache-geode/
>>>> lib/geode-dependencies.jar
>>>>
>>>> Successfully connected to: JMX Manager [host=hemera.apache.org,
>>>> port=1099]
>>>>
>>>> Cluster configuration service is up and running.
>>>>
>>>> ................
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/
>>>> target/testgeode/server4
>>>> on hemera.apache.org[22946] as server4 is currently online.
>>>> Process ID: 2204
>>>> Uptime: 8 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
>>>> server4/server4.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23825]
>>>> -Dgemfire.use-cluster-configuration=true
>>>> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
>>>> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
>>>> connector/geode-spark-connector/src/it/resources/test-
>>>> retrieve-regions.xml
>>>> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
>>>> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSig
>>>> nalHandlers=true
>>>> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
>>>> gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-assembly/build/install/apache-geode/lib/
>>>> geode-core-1.0.0-
>>>> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/
>>>> Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/./
>>>> target/scala-2.10/
>>>> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
>>>> spark-connector/geode-assembly/build/install/apache-
>>>> geode/lib/geode-dependencies.jar
>>>>
>>>> ..
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/
>>>> target/testgeode/server1
>>>> on hemera.apache.org[28993] as server1 is currently online.
>>>> Process ID: 2199
>>>> Uptime: 8 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
>>>> server1/server1.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23825]
>>>> -Dgemfire.use-cluster-configuration=true
>>>> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
>>>> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
>>>> connector/geode-spark-connector/src/it/resources/test-
>>>> retrieve-regions.xml
>>>> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
>>>> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSig
>>>> nalHandlers=true
>>>> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
>>>> gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
>>>> connector/geode-assembly/build/install/apache-geode/lib/
>>>> geode-core-1.0.0-
>>>> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/
>>>> Geode-spark-
>>>> connector/geode-spark-connector/geode-spark-connector/./
>>>> target/scala-2.10/
>>>> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
>>>> spark-connector/geode-assembly/build/install/apache-
>>>> geode/lib/geode-dependencies.jar
>>>>
>>>>
>>>>
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[26318] as server2 is currently online.
>>>> Process ID: 2153
>>>> Uptime: 9 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar
>>>>
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3 on hemera.apache.org[29777] as server3 is currently online.
>>>> Process ID: 2175
>>>> Uptime: 9 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3/server3.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar
>>>>
>>>> All WrappedArray(28993, 26318, 29777, 22946).length servers have been started
>>>> Deploying:geode-functions_2.10-0.5.0.jar
>>>> 16/09/23 16:11:43 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
>>>> org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
>>>> ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> java.lang.reflect.Method.invoke(Method.java:497)
>>>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>>>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>>>> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>>>> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>>>> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>>>> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>>>> org.junit.runners.Suite.runChild(Suite.java:127)
>>>> org.junit.runners.Suite.runChild(Suite.java:26)
>>>> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>>>> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>>>> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>>>> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>>>> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>>>> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>>>> [info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
>>>> [info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
>>>> [info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
>>>> [info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
>>>> [info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
>>>> [info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
>>>> [info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
>>>> [info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
>>>> [info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
>>>> [info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>> [info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>> [info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>> [info] java.lang.Thread.run(Thread.java:745)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
>>>> [info]   at scala.Option.foreach(Option.scala:236)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
>>>> [info]   at scala.Option.foreach(Option.scala:236)
>>>> [info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
>>>> [info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
>>>> [info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
>>>> [info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
>>>> [info]   ...
>>>> [info] BasicIntegrationTest:
>>>> === GeodeRunner: stop server 1.
>>>> === GeodeRunner: stop server 2.
>>>> === GeodeRunner: stop server 3.
>>>> === GeodeRunner: stop server 4.
>>>> ............
>>>>
>>>>
>>>>
>>>> === GeodeRunner: stop locator
>>>> ...
>>>> Successfully stop Geode locator at port 23825.
>>>> === GeodeRunner: starting locator on port 23573
>>>> === GeodeRunner: waiting for locator on port 23573
>>>> ....=== GeodeRunner: done waiting for locator on port 23573
>>>> === GeodeRunner: starting server1 with clientPort 27897
>>>> === GeodeRunner: starting server2 with clientPort 20289
>>>> ....
>>>> ....................Locator in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator on hemera.apache.org[23573] as locator is currently online.
>>>> Process ID: 3273
>>>> Uptime: 4 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
>>>> JVM Arguments: -Dgemfire.enable-cluster-configuration=true -Dgemfire.load-cluster-configuration-from-dir=false -Dgemfire.jmx-manager-http-port=23053 -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar
>>>>
>>>> Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]
>>>>
>>>> Cluster configuration service is up and running.
>>>>
>>>> ........
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2 on hemera.apache.org[20289] as server2 is currently online.
>>>> Process ID: 3465
>>>> Uptime: 7 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2/server2.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar
>>>>
>>>>
>>>> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1 on hemera.apache.org[27897] as server1 is currently online.
>>>> Process ID: 3505
>>>> Uptime: 7 seconds
>>>> GemFire Version: 1.0.0-incubating-SNAPSHOT
>>>> Java Version: 1.8.0_66
>>>> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1/server1.log
>>>> JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-regions.xml -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
>>>> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar
>>>>
>>>> All WrappedArray(27897, 20289).length servers have been started
>>>> Deploying:geode-functions_2.10-0.5.0.jar
>>>> 16/09/23 16:12:09 WARN SparkContext: Another SparkContext is being constructed (or threw an exception in its constructor).  This may indicate an error, since only one SparkContext may be running in this JVM (see SPARK-2243). The other SparkContext was created at:
>>>> org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
>>>> ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.setUpBeforeClass(JavaApiIntegrationTest.java:75)
>>>> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> java.lang.reflect.Method.invoke(Method.java:497)
>>>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>>>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>>>> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>>>> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>>>> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>>>> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>>>> org.junit.runners.Suite.runChild(Suite.java:127)
>>>> org.junit.runners.Suite.runChild(Suite.java:26)
>>>> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>>>> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>>>> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>>>> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>>>> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>>>> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>>>> [info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
>>>> [info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
>>>> [info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
>>>> [info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
>>>> [info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
>>>> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
>>>> [info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
>>>> [info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
>>>> [info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
>>>> [info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
>>>> [info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>> [info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>> [info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>> [info] java.lang.Thread.run(Thread.java:745)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
>>>> [info]   at scala.Option.foreach(Option.scala:236)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
>>>> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
>>>> [info]   at scala.Option.foreach(Option.scala:236)
>>>> [info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
>>>> [info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
>>>> [info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
>>>> [info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
>>>> [info]   ...
>>>> [info] ScalaTest
>>>> [info] Run completed in 1 minute, 59 seconds.
>>>> [info] Total number of tests run: 0
>>>> [info] Suites: completed 1, aborted 3
>>>> [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
>>>> [info] *** 3 SUITES ABORTED ***
>>>> [error] Error: Total 3, Failed 0, Errors 3, Passed 0
>>>> [error] Error during tests:
>>>> [error]       ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
>>>> [error]       ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
>>>> [error]       ittest.org.apache.geode.spark.connector.BasicIntegrationTest
>>>> [error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
>>>> [error] Total time: 128 s, completed Sep 23, 2016 4:12:09 PM
>>>> Build step 'Execute shell' marked build as failure
>>>> Recording test results
>>>> Skipped archiving because build is not successful
>>>>
>>>>
>
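
A note on the aborted suites in the quoted log above: Spark 1.x allows only
one SparkContext per JVM (SPARK-2243), so any suite whose beforeAll builds a
fresh context while an earlier one is still alive aborts exactly as shown.
Below is a minimal sketch of the two ways out that the error text itself
names, assuming a Spark 1.x driver; the master and app name are made-up
placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class OneContextPerJvm {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setMaster("local[2]")           // placeholder master for a local run
        .setAppName("GeodeConnectorIT"); // placeholder app name

    // Preferred: create one context, share it across the suites, and stop
    // it when the last suite finishes so the JVM is free for the next one.
    JavaSparkContext sc = new JavaSparkContext(conf);
    try {
      // ... run the suites against the shared context ...
    } finally {
      sc.stop(); // releases the one-context-per-JVM slot
    }

    // Escape hatch named in the error text (masks the check, not a fix):
    // conf.set("spark.driver.allowMultipleContexts", "true");
  }
}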

Re: Build failed in Jenkins: Geode-spark-connector #78

Posted by Udo Kohlmeyer <uk...@pivotal.io>.
I can easily fix this.

Surely we have a utility lying around in the core that can handle "String.isEmpty".

--Udo
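
For reference, a dependency-free stand-in for the Spring call could be as
small as the sketch below, assuming an empty check is all that is needed on
that path (the class name is a placeholder, not the actual core utility Udo
has in mind):

public final class Strings { // placeholder name, not the real Geode utility
  private Strings() {}

  // Stand-in for the isEmpty-style check used on the SSL config path:
  // true when the value is null or has zero length.
  public static boolean isEmpty(String value) {
    return value == null || value.isEmpty();
  }
}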


On 24/09/2016 9:56 AM, Anthony Baker wrote:
> Yep, I’m seeing failures on any client app that doesn’t explicitly include spring as a dependency.
>
> Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/util/StringUtils
> 	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:274)
> 	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:270)
> 	at org.apache.geode.internal.net.SSLConfigurationFactory.createSSLConfigForComponent(SSLConfigurationFactory.java:138)
> 	at org.apache.geode.internal.net.SSLConfigurationFactory.getSSLConfigForComponent(SSLConfigurationFactory.java:67)
> 	at org.apache.geode.internal.net.SocketCreatorFactory.getSocketCreatorForComponent(SocketCreatorFactory.java:67)
> 	at org.apache.geode.distributed.internal.tcpserver.TcpClient.<init>(TcpClient.java:69)
> 	at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.<init>(AutoConnectionSourceImpl.java:114)
> 	at org.apache.geode.cache.client.internal.PoolImpl.getSourceImpl(PoolImpl.java:579)
> 	at org.apache.geode.cache.client.internal.PoolImpl.<init>(PoolImpl.java:219)
> 	at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:132)
> 	at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:319)
> 	at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2943)
> 	at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1293)
> 	at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1124)
> 	at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:765)
> 	at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:740)
> 	at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:235)
> 	at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:189)
> 	at HelloWorld.main(HelloWorld.java:25)
> Caused by: java.lang.ClassNotFoundException: org.springframework.util.StringUtils
> 	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> 	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> 	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> 	... 19 more
>
> Anthony
>
>
>> On Sep 23, 2016, at 4:34 PM, Dan Smith <ds...@pivotal.io> wrote:
>>
>> I created GEODE-1934 for this. It looks like the problem is actually that
>> our dependencies for geode-core are messed up. spring-core is marked
>> optional, but we're using it in critical places like this
>> SSLConfigurationFactory.
>>
>> In my opinion we shouldn't depend on spring-core at all unless we're
>> actually going to use it for things other than StringUtils. I think we've
>> accidentally introduced dependencies on it because the gfsh code in the
>> core is pulling in a bunch of spring libraries.
>>
>> -Dan
>>
>>
>> On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
>> jenkins@builds.apache.org> wrote:
>>
>>> See <https://builds.apache.org/job/Geode-spark-connector/78/changes>
>>>
>>> Changes:
>>>
>>> [hkhamesra] GEODE-37 In spark connector we call TcpClient static method to
>>> get the
>>>
>>> [klund] GEODE-1906: fix misspelling of Successfully
>>>
>>> [upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators
>>> with gateways
>>>
>>> ------------------------------------------
>>> [...truncated 1883 lines...]


Re: Build failed in Jenkins: Geode-spark-connector #78

Posted by Anthony Baker <ab...@pivotal.io>.
Yep, I’m seeing failures on any client app that doesn’t explicitly include spring as a dependency.

Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/util/StringUtils
	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:274)
	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:270)
	at org.apache.geode.internal.net.SSLConfigurationFactory.createSSLConfigForComponent(SSLConfigurationFactory.java:138)
	at org.apache.geode.internal.net.SSLConfigurationFactory.getSSLConfigForComponent(SSLConfigurationFactory.java:67)
	at org.apache.geode.internal.net.SocketCreatorFactory.getSocketCreatorForComponent(SocketCreatorFactory.java:67)
	at org.apache.geode.distributed.internal.tcpserver.TcpClient.<init>(TcpClient.java:69)
	at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.<init>(AutoConnectionSourceImpl.java:114)
	at org.apache.geode.cache.client.internal.PoolImpl.getSourceImpl(PoolImpl.java:579)
	at org.apache.geode.cache.client.internal.PoolImpl.<init>(PoolImpl.java:219)
	at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:132)
	at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:319)
	at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2943)
	at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1293)
	at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1124)
	at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:765)
	at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:740)
	at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:235)
	at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:189)
	at HelloWorld.main(HelloWorld.java:25)
Caused by: java.lang.ClassNotFoundException: org.springframework.util.StringUtils
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more
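
For reference, a client this small is enough to trigger it. A rough sketch
(the locator host and port below are made up):

import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class HelloWorld {
  public static void main(String[] args) {
    // create() descends into SSLConfigurationFactory, which references
    // org.springframework.util.StringUtils and throws NoClassDefFoundError
    // when spring-core is absent from the classpath.
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334) // made-up locator host/port
        .create();
    cache.close();
  }
}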

Anthony


> On Sep 23, 2016, at 4:34 PM, Dan Smith <ds...@pivotal.io> wrote:
> 
> I created GEODE-1934 for this. It looks like the problem is actually that
> our dependencies for geode-core are messed up. spring-core is marked
> optional, but we're using it in critical places like this
> SSLConfigurationFactory.
> 
> In my opinion we shouldn't depend on spring-core at all unless we're
> actually going to use it for things other than StringUtils. I think we've
> accidentally introduced dependencies on it because the gfsh code in the
> core is pulling in a bunch of spring libraries.
> 
> -Dan
> 
> 
> On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
> jenkins@builds.apache.org> wrote:
> 
>> See <https://builds.apache.org/job/Geode-spark-connector/78/changes>
>> 
>> [...quoted build log trimmed; the full log is quoted in the reply below...]


Re: Build failed in Jenkins: Geode-spark-connector #78

Posted by Dan Smith <ds...@pivotal.io>.
I created GEODE-1934 for this. It looks like the problem is actually that
our dependencies for geode-core are messed up. spring-core is marked
optional, but we're using it in critical places like this
SSLConfigurationFactory.
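
As a minimal sketch of the failure mode (hypothetical demo class, not
Geode code; it assumes spring-core is absent from the consumer's
classpath, which is exactly what marking a dependency optional produces
downstream):

    public class OptionalDepDemo {
        public static void main(String[] args) {
            try {
                // Any code path that loads a spring-core class will do;
                // a geode-core call into SSLConfigurationFactory fails
                // the same way, surfacing as a NoClassDefFoundError.
                Class.forName("org.springframework.util.StringUtils");
                System.out.println("spring-core is on the classpath");
            } catch (ClassNotFoundException e) {
                System.out.println("spring-core is missing: " + e);
            }
        }
    }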

In my opinion we shouldn't depend on spring-core at all unless we're
actually going to use it for things other than StringUtils. I think we've
accidentally introduced dependencies on it because the gfsh code in the
core is pulling in a bunch of spring libraries.
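
If null/empty/whitespace checks are really all we use StringUtils for, a
couple of inlined helpers would remove the dependency entirely. A rough
sketch (hypothetical class name; the two methods mirror the spring-core
semantics I believe we rely on):

    final class Strings {
        private Strings() {}

        // Equivalent of org.springframework.util.StringUtils.isEmpty(Object).
        static boolean isEmpty(Object str) {
            return str == null || "".equals(str);
        }

        // Equivalent of org.springframework.util.StringUtils.hasText(String).
        static boolean hasText(String str) {
            if (str == null || str.isEmpty()) {
                return false;
            }
            for (int i = 0; i < str.length(); i++) {
                if (!Character.isWhitespace(str.charAt(i))) {
                    return true;
                }
            }
            return false;
        }
    }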

-Dan


On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
jenkins@builds.apache.org> wrote:

> See <https://builds.apache.org/job/Geode-spark-connector/78/changes>
>
> Changes:
>
> [hkhamesra] GEODE-37 In spark connector we call TcpClient static method to
> get the
>
> [klund] GEODE-1906: fix misspelling of Successfully
>
> [upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators
> with gateways
>
> ------------------------------------------
> [...truncated 1883 lines...]
> 16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is
> /tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-
> 02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
> 16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
> 16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file
> server' on port 40135.
> 16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
> 16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port
> 4040. Attempting port 4041.
> 16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on
> port 4041.
> 16/09/23 16:11:15 INFO SparkUI: Started SparkUI at http://localhost:4041
> 16/09/23 16:11:15 INFO Executor: Starting executor ID <driver> on host
> localhost
> 16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver:
> akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
> 16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on 41182
> 16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register BlockManager
> 16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block manager
> localhost:41182 with 2.8 GB RAM, BlockManagerId(<driver>, localhost, 41182)
> 16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
> === GeodeRunner: stop server 1.
> === GeodeRunner: stop server 2.
> [info] RetrieveRegionIntegrationTest:
> ......
>
> === GeodeRunner: stop locator
> ...
> Successfully stop Geode locator at port 27662.
> === GeodeRunner: starting locator on port 23825
> === GeodeRunner: waiting for locator on port 23825
> ....=== GeodeRunner: done waiting for locator on port 23825
> === GeodeRunner: starting server1 with clientPort 28993
> === GeodeRunner: starting server2 with clientPort 26318
> === GeodeRunner: starting server3 with clientPort 29777
> === GeodeRunner: starting server4 with clientPort 22946
> ....
> ............................................Locator in
> /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/target/testgeode/locator on
> hemera.apache.org[23825] as locator is currently online.
> Process ID: 1860
> Uptime: 4 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> locator/locator.log
> JVM Arguments: -Dgemfire.enable-cluster-configuration=true
> -Dgemfire.load-cluster-configuration-from-dir=false
> -Dgemfire.jmx-manager-http-port=29684 -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/
> lib/geode-dependencies.jar
>
> Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]
>
> Cluster configuration service is up and running.
>
> ................
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4
> on hemera.apache.org[22946] as server4 is currently online.
> Process ID: 2204
> Uptime: 8 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server4/server4.log
> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
> ..
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1
> on hemera.apache.org[28993] as server1 is currently online.
> Process ID: 2199
> Uptime: 8 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server1/server1.log
> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
>
>
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2
> on hemera.apache.org[26318] as server2 is currently online.
> Process ID: 2153
> Uptime: 9 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server2/server2.log
> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server3
> on hemera.apache.org[29777] as server3 is currently online.
> Process ID: 2175
> Uptime: 9 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server3/server3.log
> JVM Arguments: -Dgemfire.locators=localhost[23825] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
> All WrappedArray(28993, 26318, 29777, 22946).length servers have been
> started
> Deploying:geode-functions_2.10-0.5.0.jar
> 16/09/23 16:11:43 WARN SparkContext: Another SparkContext is being
> constructed (or threw an exception in its constructor).  This may indicate
> an error, since only one SparkContext may be running in this JVM (see
> SPARK-2243). The other SparkContext was created at:
> org.apache.spark.api.java.JavaSparkContext.<init>(
> JavaSparkContext.scala:61)
> ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.
> setUpBeforeClass(JavaApiIntegrationTest.java:75)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:497)
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
> FrameworkMethod.java:47)
> org.junit.internal.runners.model.ReflectiveCallable.run(
> ReflectiveCallable.java:12)
> org.junit.runners.model.FrameworkMethod.invokeExplosively(
> FrameworkMethod.java:44)
> org.junit.internal.runners.statements.RunBefores.
> evaluate(RunBefores.java:24)
> org.junit.internal.runners.statements.RunAfters.evaluate(
> RunAfters.java:27)
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> org.junit.runners.Suite.runChild(Suite.java:127)
> org.junit.runners.Suite.runChild(Suite.java:26)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> [info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest *** ABORTED ***
> [info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
> [info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
> [info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
> [info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
> [info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info] java.lang.Thread.run(Thread.java:745)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
> [info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
> [info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
> [info]   at ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest.beforeAll(RetrieveRegionIntegrationTest.scala:51)
> [info]   ...
> [info] BasicIntegrationTest:
> === GeodeRunner: stop server 1.
> === GeodeRunner: stop server 2.
> === GeodeRunner: stop server 3.
> === GeodeRunner: stop server 4.
> ............
>
>
>
> === GeodeRunner: stop locator
> ...
> Successfully stop Geode locator at port 23825.
> === GeodeRunner: starting locator on port 23573
> === GeodeRunner: waiting for locator on port 23573
> ....=== GeodeRunner: done waiting for locator on port 23573
> === GeodeRunner: starting server1 with clientPort 27897
> === GeodeRunner: starting server2 with clientPort 20289
> ....
> ....................Locator in /x1/jenkins/jenkins-slave/
> workspace/Geode-spark-connector/geode-spark-connector/geode-spark-
> connector/target/testgeode/locator on hemera.apache.org[23573] as locator
> is currently online.
> Process ID: 3273
> Uptime: 4 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> locator/locator.log
> JVM Arguments: -Dgemfire.enable-cluster-configuration=true
> -Dgemfire.load-cluster-configuration-from-dir=false
> -Dgemfire.jmx-manager-http-port=23053 -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/
> lib/geode-dependencies.jar
>
> Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]
>
> Cluster configuration service is up and running.
>
> ........
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server2
> on hemera.apache.org[20289] as server2 is currently online.
> Process ID: 3465
> Uptime: 7 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server2/server2.log
> JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
>
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1
> on hemera.apache.org[27897] as server1 is currently online.
> Process ID: 3505
> Uptime: 7 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server1/server1.log
> JVM Arguments: -Dgemfire.locators=localhost[23573] -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/
> it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-
> spark-connector/geode-assembly/build/install/apache-
> geode/lib/geode-dependencies.jar
>
> All WrappedArray(27897, 20289).length servers have been started
> Deploying:geode-functions_2.10-0.5.0.jar
> 16/09/23 16:12:09 WARN SparkContext: Another SparkContext is being
> constructed (or threw an exception in its constructor).  This may indicate
> an error, since only one SparkContext may be running in this JVM (see
> SPARK-2243). The other SparkContext was created at:
> org.apache.spark.api.java.JavaSparkContext.<init>(
> JavaSparkContext.scala:61)
> ittest.org.apache.geode.spark.connector.JavaApiIntegrationTest.
> setUpBeforeClass(JavaApiIntegrationTest.java:75)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:497)
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
> FrameworkMethod.java:47)
> org.junit.internal.runners.model.ReflectiveCallable.run(
> ReflectiveCallable.java:12)
> org.junit.runners.model.FrameworkMethod.invokeExplosively(
> FrameworkMethod.java:44)
> org.junit.internal.runners.statements.RunBefores.
> evaluate(RunBefores.java:24)
> org.junit.internal.runners.statements.RunAfters.evaluate(
> RunAfters.java:27)
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> org.junit.runners.Suite.runChild(Suite.java:127)
> org.junit.runners.Suite.runChild(Suite.java:26)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> [info] Exception encountered when attempting to run a suite with class name: ittest.org.apache.geode.spark.connector.BasicIntegrationTest *** ABORTED ***
> [info]   org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running SparkContext was created at:
> [info] org.apache.spark.SparkContext.<init>(SparkContext.scala:80)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:50)
> [info] org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.beforeAll(RDDJoinRegionIntegrationTest.scala:30)
> [info] org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
> [info] ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest.run(RDDJoinRegionIntegrationTest.scala:30)
> [info] org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:462)
> [info] org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:671)
> [info] sbt.ForkMain$Run$2.call(ForkMain.java:294)
> [info] sbt.ForkMain$Run$2.call(ForkMain.java:284)
> [info] java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [info] java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [info] java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [info] java.lang.Thread.run(Thread.java:745)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1811)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1$$anonfun$apply$10.apply(SparkContext.scala:1807)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1807)
> [info]   at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:1794)
> [info]   at scala.Option.foreach(Option.scala:236)
> [info]   at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:1794)
> [info]   at org.apache.spark.SparkContext$.markPartiallyConstructed(SparkContext.scala:1833)
> [info]   at org.apache.spark.SparkContext.<init>(SparkContext.scala:89)
> [info]   at ittest.org.apache.geode.spark.connector.BasicIntegrationTest.beforeAll(BasicIntegrationTest.scala:58)
> [info]   ...
> [info] ScalaTest
> [info] Run completed in 1 minute, 59 seconds.
> [info] Total number of tests run: 0
> [info] Suites: completed 1, aborted 3
> [info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
> [info] *** 3 SUITES ABORTED ***
> [error] Error: Total 3, Failed 0, Errors 3, Passed 0
> [error] Error during tests:
> [error]        ittest.org.apache.geode.spark.connector.RDDJoinRegionIntegrationTest
> [error]        ittest.org.apache.geode.spark.connector.RetrieveRegionIntegrationTest
> [error]        ittest.org.apache.geode.spark.connector.BasicIntegrationTest
> [error] (geode-spark-connector/it:test) sbt.TestsFailedException: Tests unsuccessful
> [error] Total time: 128 s, completed Sep 23, 2016 4:12:09 PM
> Build step 'Execute shell' marked build as failure
> Recording test results
> Skipped archiving because build is not successful
>