Posted to commits@carbondata.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2017/06/14 12:43:44 UTC

Build failed in Jenkins: carbondata-master-spark-2.1 #396

See <https://builds.apache.org/job/carbondata-master-spark-2.1/396/display/redirect?page=changes>

Changes:

[jackylk] Convert decimal to byte at the end of sort step when using GLOBAL_SORT.
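The patch itself is not included in this truncated log, but the commit message describes the idea: when GLOBAL_SORT is used, decimal values are kept as objects through the sort step and only serialized to bytes once sorting is done. As a rough illustration only, here is a minimal Scala sketch of that kind of conversion; the object and method names below are invented for this note and are not CarbonData's actual API:

import java.math.{BigDecimal, BigInteger}

// Hypothetical sketch: encode a decimal into bytes after the sort step,
// so the writer consumes a compact byte form instead of BigDecimal objects.
object DecimalToBytesSketch {
  // The column's scale is fixed by the schema, so only the unscaled
  // value needs to be stored per row.
  def toBytes(value: BigDecimal, scale: Int): Array[Byte] =
    value.setScale(scale).unscaledValue().toByteArray

  def fromBytes(bytes: Array[Byte], scale: Int): BigDecimal =
    new BigDecimal(new BigInteger(bytes), scale)

  def main(args: Array[String]): Unit = {
    val encoded = toBytes(new BigDecimal("123.45"), 2)
    println(fromBytes(encoded, 2)) // prints 123.45
  }
}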

------------------------------------------
[...truncated 311.20 KB...]
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.valid_max_columns_test. Please check the logs
- test for maxcolumns option value greater than threshold value for maxcolumns
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 ERROR LoadTable: ScalaTest-main-running-TestDataLoadWithColumnsMoreThanSchema 
java.lang.RuntimeException: csv headers should be less than the max columns: 14
	at scala.sys.package$.error(package.scala:27)
	at org.apache.carbondata.spark.util.CommonUtil$.validateMaxColumns(CommonUtil.scala:403)
	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:494)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at org.apache.spark.sql.test.Spark2TestQueryExecutor.sql(Spark2TestQueryExecutor.scala:32)
	at org.apache.spark.sql.common.util.QueryTest.sql(QueryTest.scala:84)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6$$anonfun$apply$mcV$sp$2.apply(TestDataLoadWithColumnsMoreThanSchema.scala:106)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6$$anonfun$apply$mcV$sp$2.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply$mcV$sp(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.sql.common.util.CarbonFunSuite.withFixture(CarbonFunSuite.scala:41)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.org$scalatest$BeforeAndAfterAll$$super$run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
	at org.scalatest.Suite$class.run(Suite.scala:1421)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.boundary_max_columns_test. Please check the logs
- test for boundary value for maxcolumns
17/06/14 05:13:58 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [boundary_max_columns_test] under database [default]
17/06/14 05:13:58 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [boundary_max_columns_test] under database [default]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 ERROR LoadTable: ScalaTest-main-running-TestDataLoadWithColumnsMoreThanSchema 
java.lang.RuntimeException: csv headers should be less than the max columns: 13
	at scala.sys.package$.error(package.scala:27)
	at org.apache.carbondata.spark.util.CommonUtil$.validateMaxColumns(CommonUtil.scala:403)
	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:494)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at org.apache.spark.sql.test.Spark2TestQueryExecutor.sql(Spark2TestQueryExecutor.scala:32)
	at org.apache.spark.sql.common.util.QueryTest.sql(QueryTest.scala:84)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7$$anonfun$apply$mcV$sp$3.apply(TestDataLoadWithColumnsMoreThanSchema.scala:122)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7$$anonfun$apply$mcV$sp$3.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply$mcV$sp(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.sql.common.util.CarbonFunSuite.withFixture(CarbonFunSuite.scala:41)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.org$scalatest$BeforeAndAfterAll$$super$run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
	at org.scalatest.Suite$class.run(Suite.scala:1421)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.boundary_max_columns_test. Please check the logs
- test for maxcolumns value less than columns in 1st line of csv file
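Both RuntimeExceptions above ("csv headers should be less than the max columns: 14" and ": 13") are raised deliberately by these tests via CommonUtil$.validateMaxColumns (CommonUtil.scala:403), which calls scala.sys.error when the CSV header has at least as many columns as the effective maxcolumns value. A minimal sketch of that check, reconstructed from the error text alone; the names and the default/threshold constants here are assumptions for illustration, not CarbonData's exact code:

// Reconstructed sketch of the check behind
// "csv headers should be less than the max columns: N".
object ValidateMaxColumnsSketch {
  val DefaultMaxColumns = 2000    // assumed defaults, for illustration only
  val ThresholdMaxColumns = 20000

  def validateMaxColumns(csvHeaders: Array[String],
                         maxColumnsOption: Option[Int]): Int = {
    // Cap any user-supplied maxcolumns at the threshold.
    val maxColumns = maxColumnsOption
      .map(m => math.min(m, ThresholdMaxColumns))
      .getOrElse(DefaultMaxColumns)
    if (csvHeaders.length >= maxColumns) {
      // sys.error throws java.lang.RuntimeException, matching the trace above.
      sys.error(s"csv headers should be less than the max columns: $maxColumns")
    }
    maxColumns
  }

  def main(args: Array[String]): Unit = {
    val headers = (1 to 14).map(i => s"col$i").toArray
    // Mirrors the boundary test: 14 headers with maxcolumns of 14 must fail,
    // which is exactly what the intercepted exceptions in the log verify.
    validateMaxColumns(headers, Some(14))
  }
}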
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [smart_500_de]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [smart_500_de]
17/06/14 05:14:03 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.smart_500_de
17/06/14 05:14:04 ERROR DataLoadExecutor: [Executor task launch worker-2][partitionID:default_smart_500_de_82dfe7b2-f518-4fda-9ae6-49b67e75a5f6] Data Load is partially success for table smart_500_de
17/06/14 05:14:04 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is partially successful for default.smart_500_de
- test for duplicate column name in the Fileheader options in load command
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [char_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [char_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [max_columns_value_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [max_columns_value_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [boundary_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [boundary_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [valid_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [valid_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [smart_500_de] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [smart_500_de] under database [default]
17/06/14 05:14:04 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [timestamptyenulldata]
TimestampDataTypeNullDataTest:
17/06/14 05:14:05 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [timestamptyenulldata]
17/06/14 05:14:05 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.timestamptyenulldata
17/06/14 05:14:05 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.timestamptyenulldata
- SELECT max(dateField) FROM timestampTyeNullData where dateField is not null
- SELECT * FROM timestampTyeNullData where dateField is null
17/06/14 05:14:06 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [timestamptyenulldata] under database [default]
17/06/14 05:14:06 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [timestamptyenulldata] under database [default]
JoinWithoutDictionaryColumn:
17/06/14 05:14:07 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [mobile]
17/06/14 05:14:08 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [mobile]
17/06/14 05:14:08 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [emp]
17/06/14 05:14:13 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [emp]
17/06/14 05:14:13 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [mobile_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [mobile_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [emp_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [emp_d]
17/06/14 05:17:06 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.mobile
17/06/14 05:17:07 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.mobile
17/06/14 05:21:02 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.emp
17/06/14 05:38:29 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.emp
Sending e-mails to: commits@carbondata.apache.org
ERROR: Failed to parse POMs
java.io.IOException: Backing channel 'ubuntu-us1' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
	at com.sun.proxy.$Proxy124.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1043)
	at hudson.maven.ProcessCache$MavenProcess.call(ProcessCache.java:166)
	at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.doRun(MavenModuleSetBuild.java:873)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
	at hudson.model.Run.execute(Run.java:1728)
	at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:544)
	at hudson.model.ResourceController.execute(ResourceController.java:98)
	at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.remoting.Channel$OrderlyShutdown: hudson.remoting.ProxyException: java.util.concurrent.TimeoutException: Ping started at 1497442951336 hasn't completed by 1497443202839
	at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1129)
	at hudson.remoting.Channel$1.handle(Channel.java:527)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:83)
Caused by: Command close created at
	at hudson.remoting.Command.<init>(Command.java:60)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1123)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1121)
	at hudson.remoting.Channel.close(Channel.java:1281)
	at hudson.slaves.ChannelPinger$1.onDead(ChannelPinger.java:180)
	at hudson.remoting.PingThread.ping(PingThread.java:130)
	at hudson.remoting.PingThread.run(PingThread.java:86)
Caused by: hudson.remoting.ProxyException: java.util.concurrent.TimeoutException: Ping started at 1497442951336 hasn't completed by 1497443202839
	... 2 more
ERROR: ubuntu-us1 is offline; cannot locate JDK 1.8 (latest)
ERROR: ubuntu-us1 is offline; cannot locate Maven 3.3.9
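Read together, these errors give the actual failure cause: the test suites themselves had passed the point shown above, but the master lost its agent. Per the timestamps in the TimeoutException, the ping started at 1497442951336 and was still unanswered at 1497443202839, i.e. 1497443202839 - 1497442951336 = 251503 ms (about 4.2 minutes) later, so the master closed the channel to ubuntu-us1, the POM parse step lost its backing channel, and the now-offline node could no longer provide JDK 1.8 or Maven 3.3.9.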


Jenkins build is back to stable : carbondata-master-spark-2.1 #400

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/400/display/redirect?page=changes>


Jenkins build is still unstable: carbondata-master-spark-2.1 #399

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/399/display/redirect?page=changes>


Jenkins build is still unstable: carbondata-master-spark-2.1 #398

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/398/display/redirect?page=changes>


Jenkins build is unstable: carbondata-master-spark-2.1 #397

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/397/display/redirect?page=changes>