Posted to commits@carbondata.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2017/06/14 10:54:10 UTC

Build failed in Jenkins: carbondata-master-spark-2.1 #395

See <https://builds.apache.org/job/carbondata-master-spark-2.1/395/display/redirect>

------------------------------------------
[...truncated 61.35 KB...]
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 32 source files to <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/classes>
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormat.java>: Some input files use unchecked or unsafe operations.
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/main/java/org/apache/carbondata/hadoop/CarbonInputFormat.java>: Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ carbondata-hadoop ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ carbondata-hadoop ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 5 source files to <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/test-classes>
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java>: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java> uses unchecked or unsafe operations.
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/test/java/org/apache/carbondata/hadoop/test/util/StoreCreator.java>: Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-hadoop ---
[INFO] Surefire report directory: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/surefire-reports>

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Running org.apache.carbondata.hadoop.test.util.ObjectSerializationUtilTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in org.apache.carbondata.hadoop.test.util.ObjectSerializationUtilTest
Running org.apache.carbondata.hadoop.ft.CarbonInputMapperTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.789 sec - in org.apache.carbondata.hadoop.ft.CarbonInputMapperTest
Running org.apache.carbondata.hadoop.ft.InputFilesTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.282 sec - in org.apache.carbondata.hadoop.ft.InputFilesTest

Results :

Tests run: 6, Failures: 0, Errors: 0, Skipped: 0

[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-hadoop ---
[INFO] Building jar: <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar>
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ carbondata-hadoop ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-hadoop ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-hadoop ---
[WARNING] sourceDirectory is not specified or does not exist value=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/src/main/scala>
Saving to outputFile=<https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/scalastyle-output.xml>
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 3 ms
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ carbondata-hadoop ---
[INFO] Installing <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar> to /home/jenkins/jenkins-slave/maven-repositories/1/org/apache/carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.jar
[INFO] Installing <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/pom.xml> to /home/jenkins/jenkins-slave/maven-repositories/1/org/apache/carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.pom
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache CarbonData :: Spark Common 1.2.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-spark-common ---
[INFO] Deleting <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/target>
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ carbondata-spark-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ carbondata-spark-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/resources>
[INFO] Copying 0 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-scala-plugin:2.15.2:compile (default) @ carbondata-spark-common ---
[INFO] Checking for multiple versions of scala
[INFO] includes = [**/*.java,**/*.scala,]
[INFO] excludes = []
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/java>:-1: info: compiling
[INFO] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala>:-1: info: compiling
[INFO] Compiling 61 source files to <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/target/classes> at 1497437578380
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableAddColumnRDD.scala>:51: warning: no valid targets for annotation on value newColumns - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]     @transient newColumns: Seq[ColumnSchema],
[INFO]      ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableDropColumnRDD.scala>:49: warning: no valid targets for annotation on value newColumns - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]     @transient newColumns: Seq[ColumnSchema],
[INFO]      ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala>:52: warning: no valid targets for annotation on value sc - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]     @transient sc: SparkContext,
[INFO]      ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/CarbonScanRDD.scala>:56: warning: no valid targets for annotation on value carbonTable - it is discarded unused. You may specify targets with meta-annotations, e.g. @(transient @param)
[INFO]     @transient carbonTable: CarbonTable)
[INFO]      ^
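
The four warnings above are one issue: on a Scala class parameter, a bare @transient targets the parameter itself and is discarded, so it never reaches the field that serialization actually inspects. The compiler's suggested meta-annotation pins it to the right target. A minimal sketch of that fix, with a hypothetical class standing in for the CarbonData RDDs:

    import scala.annotation.meta.field

    // @(transient @field) attaches the annotation to the generated field,
    // so the value is really excluded from serialization instead of the
    // annotation being silently dropped.
    class ScanLikeRDD(
        @(transient @field) val sc: AnyRef,
        val tableName: String) extends Serializable
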
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `CoalescedRDDPartition' is permanently hidden by definition of object CoalescedRDDPartition in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `CoalescedRDDPartition' is permanently hidden by definition of class CoalescedRDDPartition in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                              ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `DataLoadPartitionCoalescer' is permanently hidden by definition of object DataLoadPartitionCoalescer in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                                                     ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `DataLoadPartitionCoalescer' is permanently hidden by definition of class DataLoadPartitionCoalescer in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                                                     ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `RDD' is permanently hidden by definition of object RDD in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                                                                                 ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/UpdateCoalescedRDD.scala>:23: warning: imported `RDD' is permanently hidden by definition of class RDD in package rdd
[INFO] import org.apache.spark.rdd.{CoalescedRDDPartition, DataLoadPartitionCoalescer, RDD}
[INFO]                                                                                 ^
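
The six "permanently hidden" warnings stem from UpdateCoalescedRDD.scala living in the org.apache.spark.rdd package itself: importing CoalescedRDDPartition, DataLoadPartitionCoalescer, and RDD by name collides with definitions already in scope from that package, so the import can never take effect. Renaming on import (or dropping it) avoids the collision. A self-contained sketch of the same situation:

    package demo {
      class Widget
      object Inside {
        // `import demo.Widget` here would be flagged as permanently hidden,
        // because Widget is already visible inside package demo. Renaming
        // the import avoids the clash.
        import demo.{Widget => ImportedWidget}
        val w = new ImportedWidget
      }
    }
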
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala>:125: warning: non-variable type argument Any in type pattern scala.collection.Map[Any,Any] is unchecked since it is eliminated by erasure
[INFO]         case m: scala.collection.Map[Any, Any] =>
[INFO]                                  ^
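
The erasure warning means the Any type arguments in the pattern cannot be verified at runtime; the JVM only sees a raw Map. Matching with wildcards states exactly what is checked. A sketch:

    def describe(x: Any): String = x match {
      // Only the class is testable at runtime; `Map[_, _]` makes that
      // explicit, where `Map[Any, Any]` would pretend to check more.
      case m: scala.collection.Map[_, _] => s"map with ${m.size} entries"
      case other => s"not a map: $other"
    }
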
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/DataLoadPartitionCoalescer.scala>:193: warning: match may not be exhaustive.
[INFO] It would fail on the following input: None
[INFO]                 hostMapPartitionIds.get(loc) match {
[INFO]                                        ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/DataLoadPartitionCoalescer.scala>:190: warning: match may not be exhaustive.
[INFO] It would fail on the following input: None
[INFO]           partitionIdMapHosts.get(partitionId) match {
[INFO]                                  ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala>:68: warning: match may not be exhaustive.
[INFO] It would fail on the following inputs: ARRAY, BYTE, BYTE_ARRAY, FLOAT, MAP, NULL, STRUCT
[INFO]     dataType match {
[INFO]     ^
[WARNING] <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/DataTypeConverterUtil.scala>:76: warning: match may not be exhaustive.
[INFO] It would fail on the following inputs: BOOLEAN, BYTE, BYTE_ARRAY, MAP, NULL
[INFO]     dataType match {
[INFO]     ^
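
The exhaustiveness warnings fall into two groups: Option results of Map#get matched only against Some (so a missing key would raise a MatchError on None), and a dataType enumeration matched against a subset of its members. Handling the remaining cases explicitly removes the runtime risk. A sketch for the Option case, with hypothetical data:

    val hostMapPartitionIds = Map("host-a" -> Seq(0, 1)) // hypothetical

    // Covering None makes the match exhaustive: an unknown host yields an
    // empty partition list instead of a MatchError.
    def partitionsFor(loc: String): Seq[Int] =
      hostMapPartitionIds.get(loc) match {
        case Some(ids) => ids
        case None => Seq.empty
      }
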
[INFO] #
[ERROR] # A fatal error has been detected by the Java Runtime Environment:
[INFO] #
[ERROR] #  Internal Error (output.cpp:1593), pid=8776, tid=0x00007f474955f700
[INFO] #  guarantee((int)(blk_starts[i+1] - blk_starts[i]) >= (current_offset - blk_offset)) failed: shouldn't increase block size
[INFO] #
[INFO] # JRE version: Java(TM) SE Runtime Environment (8.0_131-b11) (build 1.8.0_131-b11)
[INFO] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.131-b11 mixed mode linux-amd64 compressed oops)
[INFO] # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
[INFO] #
[ERROR] # An error report file with more information is saved as:
[INFO] # <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hs_err_pid8776.log>
[INFO] 
[ERROR] [error occurred during error reporting , id 0xb]
[INFO] 
[INFO] #
[INFO] # If you would like to submit a bug report, please visit:
[INFO] #   http://bugreport.java.com/bugreport/crash.jsp
[INFO] #
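
This abort, not a compilation error, is what kills the module: the guarantee failure in output.cpp comes from the HotSpot C2 JIT while compiling code for the Scala compiler's own process, and the JVM dies with SIGABRT. That accounts for the exit value Maven reports further down, since 128 + 6 (SIGABRT) = 134. The full details land in hs_err_pid8776.log; typical recourse for this class of crash is a newer JDK build or, if the offending method can be identified from the error report, excluding it from JIT compilation via -XX:CompileCommand=exclude. Retrying the build also tends to succeed, since the crash depends on JIT timing.
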
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent ........................ SUCCESS [ 11.007 s]
[INFO] Apache CarbonData :: Common ........................ SUCCESS [ 10.748 s]
[INFO] Apache CarbonData :: Core .......................... SUCCESS [02:49 min]
[INFO] Apache CarbonData :: Processing .................... SUCCESS [ 23.357 s]
[INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 23.658 s]
[INFO] Apache CarbonData :: Spark Common .................. FAILURE [ 41.645 s]
[INFO] Apache CarbonData :: Spark2 ........................ SKIPPED
[INFO] Apache CarbonData :: Spark Common Test ............. SKIPPED
[INFO] Apache CarbonData :: Assembly ...................... SKIPPED
[INFO] Apache CarbonData :: Flink Examples ................ SKIPPED
[INFO] Apache CarbonData :: Hive .......................... SKIPPED
[INFO] Apache CarbonData :: presto ........................ SKIPPED
[INFO] Apache CarbonData :: Spark2 Examples ............... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:09 min
[INFO] Finished at: 2017-06-14T10:53:43+00:00
[INFO] Final Memory: 90M/1216M
[INFO] ------------------------------------------------------------------------
Waiting for Jenkins to finish collecting data
[ERROR] Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project carbondata-spark-common: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134) -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.scala-tools:maven-scala-plugin:2.15.2:compile (default) on project carbondata-spark-common: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
	at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
	at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
	at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
	at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
	at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
	at org.jvnet.hudson.maven3.launcher.Maven33Launcher.main(Maven33Launcher.java:129)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launchStandard(Launcher.java:330)
	at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:238)
	at jenkins.maven3.agent.Maven33Main.launch(Maven33Main.java:176)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at hudson.maven.Maven3Builder.call(Maven3Builder.java:139)
	at hudson.maven.Maven3Builder.call(Maven3Builder.java:70)
	at hudson.remoting.UserRequest.perform(UserRequest.java:153)
	at hudson.remoting.UserRequest.perform(UserRequest.java:50)
	at hudson.remoting.Request$2.run(Request.java:336)
	at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.maven.plugin.MojoExecutionException: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org_scala_tools_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:350)
	at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
	at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
	... 31 more
Caused by: org.apache.commons.exec.ExecuteException: Process exited with an error: 134(Exit value: 134)
	at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:346)
	at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:149)
	at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:136)
	at org_scala_tools_maven_executions.JavaMainCallerByFork.run(JavaMainCallerByFork.java:80)
	at org_scala_tools_maven.ScalaCompilerSupport.compile(ScalaCompilerSupport.java:124)
	at org_scala_tools_maven.ScalaCompilerSupport.doExecute(ScalaCompilerSupport.java:80)
	at org_scala_tools_maven.ScalaMojoSupport.execute(ScalaMojoSupport.java:342)
	... 33 more
[ERROR] 
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :carbondata-spark-common
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/processing/pom.xml> to org.apache.carbondata/carbondata-processing/1.2.0-SNAPSHOT/carbondata-processing-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/processing/target/carbondata-processing-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-processing/1.2.0-SNAPSHOT/carbondata-processing-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common/pom.xml> to org.apache.carbondata/carbondata-spark-common/1.2.0-SNAPSHOT/carbondata-spark-common-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/hive/pom.xml> to org.apache.carbondata/carbondata-hive/1.2.0-SNAPSHOT/carbondata-hive-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/common/pom.xml> to org.apache.carbondata/carbondata-common/1.2.0-SNAPSHOT/carbondata-common-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/common/target/carbondata-common-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-common/1.2.0-SNAPSHOT/carbondata-common-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/examples/spark2/pom.xml> to org.apache.carbondata/carbondata-examples-spark2/1.2.0-SNAPSHOT/carbondata-examples-spark2-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark2/pom.xml> to org.apache.carbondata/carbondata-spark2/1.2.0-SNAPSHOT/carbondata-spark2-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/assembly/pom.xml> to org.apache.carbondata/carbondata-assembly/1.2.0-SNAPSHOT/carbondata-assembly-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/pom.xml> to org.apache.carbondata/carbondata-parent/1.2.0-SNAPSHOT/carbondata-parent-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/examples/flink/pom.xml> to org.apache.carbondata/carbondata-examples-flink/1.2.0-SNAPSHOT/carbondata-examples-flink-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/core/pom.xml> to org.apache.carbondata/carbondata-core/1.2.0-SNAPSHOT/carbondata-core-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/core/target/carbondata-core-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-core/1.2.0-SNAPSHOT/carbondata-core-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/pom.xml> to org.apache.carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/hadoop/target/carbondata-hadoop-1.2.0-SNAPSHOT.jar> to org.apache.carbondata/carbondata-hadoop/1.2.0-SNAPSHOT/carbondata-hadoop-1.2.0-SNAPSHOT.jar
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/presto/pom.xml> to org.apache.carbondata/carbondata-presto/1.2.0-SNAPSHOT/carbondata-presto-1.2.0-SNAPSHOT.pom
[JENKINS] Archiving <https://builds.apache.org/job/carbondata-master-spark-2.1/ws/integration/spark-common-test/pom.xml> to org.apache.carbondata/carbondata-spark-common-test/1.2.0-SNAPSHOT/carbondata-spark-common-test-1.2.0-SNAPSHOT.pom
Sending e-mails to: commits@carbondata.apache.org
channel stopped

Jenkins build is back to stable : carbondata-master-spark-2.1 #400

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/400/display/redirect?page=changes>


Jenkins build is still unstable: carbondata-master-spark-2.1 #399

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/399/display/redirect?page=changes>


Jenkins build is still unstable: carbondata-master-spark-2.1 #398

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/398/display/redirect?page=changes>


Jenkins build is unstable: carbondata-master-spark-2.1 #397

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/397/display/redirect?page=changes>


Build failed in Jenkins: carbondata-master-spark-2.1 #396

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/carbondata-master-spark-2.1/396/display/redirect?page=changes>

Changes:

[jackylk] Convert decimal to byte at the end of sort step when using GLOBAL_SORT.

------------------------------------------
[...truncated 311.20 KB...]
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.valid_max_columns_test. Please check the logs
- test for maxcolumns option value greater than threshold value for maxcolumns
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 ERROR LoadTable: ScalaTest-main-running-TestDataLoadWithColumnsMoreThanSchema 
java.lang.RuntimeException: csv headers should be less than the max columns: 14
	at scala.sys.package$.error(package.scala:27)
	at org.apache.carbondata.spark.util.CommonUtil$.validateMaxColumns(CommonUtil.scala:403)
	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:494)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at org.apache.spark.sql.test.Spark2TestQueryExecutor.sql(Spark2TestQueryExecutor.scala:32)
	at org.apache.spark.sql.common.util.QueryTest.sql(QueryTest.scala:84)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6$$anonfun$apply$mcV$sp$2.apply(TestDataLoadWithColumnsMoreThanSchema.scala:106)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6$$anonfun$apply$mcV$sp$2.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply$mcV$sp(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$6.apply(TestDataLoadWithColumnsMoreThanSchema.scala:96)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.sql.common.util.CarbonFunSuite.withFixture(CarbonFunSuite.scala:41)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.org$scalatest$BeforeAndAfterAll$$super$run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
	at org.scalatest.Suite$class.run(Suite.scala:1421)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.boundary_max_columns_test. Please check the logs
- test for boundary value for maxcolumns
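
The intercepted RuntimeException above is raised by CommonUtil.validateMaxColumns (CommonUtil.scala:403 in the trace) via sys.error when the CSV header carries at least as many columns as the effective maxcolumns value. A simplified sketch of that check, with a hypothetical signature and without the clamping against a configured ceiling that the real code also performs:

    // Hypothetical simplification: the header must stay strictly below the
    // maxColumns bound, otherwise the load is rejected up front.
    def validateMaxColumns(csvHeaders: Array[String], maxColumns: Int): Int = {
      if (csvHeaders.length >= maxColumns) {
        sys.error(s"csv headers should be less than the max columns: $maxColumns")
      }
      maxColumns
    }

    validateMaxColumns(Array("id", "name", "salary"), maxColumns = 14) // passes: 3 < 14
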
17/06/14 05:13:58 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [boundary_max_columns_test] under database [default]
17/06/14 05:13:58 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [boundary_max_columns_test] under database [default]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [boundary_max_columns_test]
17/06/14 05:13:58 ERROR LoadTable: ScalaTest-main-running-TestDataLoadWithColumnsMoreThanSchema 
java.lang.RuntimeException: csv headers should be less than the max columns: 13
	at scala.sys.package$.error(package.scala:27)
	at org.apache.carbondata.spark.util.CommonUtil$.validateMaxColumns(CommonUtil.scala:403)
	at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:494)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
	at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:114)
	at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:135)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
	at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:87)
	at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:87)
	at org.apache.spark.sql.Dataset.<init>(Dataset.scala:185)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:64)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
	at org.apache.spark.sql.test.Spark2TestQueryExecutor.sql(Spark2TestQueryExecutor.scala:32)
	at org.apache.spark.sql.common.util.QueryTest.sql(QueryTest.scala:84)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7$$anonfun$apply$mcV$sp$3.apply(TestDataLoadWithColumnsMoreThanSchema.scala:122)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7$$anonfun$apply$mcV$sp$3.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.scalatest.Assertions$class.intercept(Assertions.scala:997)
	at org.scalatest.FunSuite.intercept(FunSuite.scala:1555)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply$mcV$sp(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema$$anonfun$7.apply(TestDataLoadWithColumnsMoreThanSchema.scala:113)
	at org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
	at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
	at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
	at org.scalatest.Transformer.apply(Transformer.scala:22)
	at org.scalatest.Transformer.apply(Transformer.scala:20)
	at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
	at org.apache.spark.sql.common.util.CarbonFunSuite.withFixture(CarbonFunSuite.scala:41)
	at org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
	at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
	at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
	at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
	at org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
	at org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
	at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
	at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
	at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
	at org.scalatest.Suite$class.run(Suite.scala:1424)
	at org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1555)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.FunSuiteLike$$anonfun$run$1.apply(FunSuiteLike.scala:212)
	at org.scalatest.SuperEngine.runImpl(Engine.scala:545)
	at org.scalatest.FunSuiteLike$class.run(FunSuiteLike.scala:212)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.org$scalatest$BeforeAndAfterAll$$super$run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:257)
	at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:256)
	at org.apache.carbondata.spark.testsuite.dataload.TestDataLoadWithColumnsMoreThanSchema.run(TestDataLoadWithColumnsMoreThanSchema.scala:29)
	at org.scalatest.Suite$class.callExecuteOnSuite$1(Suite.scala:1492)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1528)
	at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at org.scalatest.Suite$class.runNestedSuites(Suite.scala:1526)
	at org.scalatest.tools.DiscoverySuite.runNestedSuites(DiscoverySuite.scala:29)
	at org.scalatest.Suite$class.run(Suite.scala:1421)
	at org.scalatest.tools.DiscoverySuite.run(DiscoverySuite.scala:29)
	at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2563)
	at org.scalatest.tools.Runner$$anonfun$doRunRunRunDaDoRunRun$3.apply(Runner.scala:2557)
	at scala.collection.immutable.List.foreach(List.scala:381)
	at org.scalatest.tools.Runner$.doRunRunRunDaDoRunRun(Runner.scala:2557)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1044)
	at org.scalatest.tools.Runner$$anonfun$runOptionallyWithPassFailReporter$2.apply(Runner.scala:1043)
	at org.scalatest.tools.Runner$.withClassLoaderAndDispatchReporter(Runner.scala:2722)
	at org.scalatest.tools.Runner$.runOptionallyWithPassFailReporter(Runner.scala:1043)
	at org.scalatest.tools.Runner$.main(Runner.scala:860)
	at org.scalatest.tools.Runner.main(Runner.scala)
17/06/14 05:13:58 AUDIT LoadTable: [jenkins-ubuntu1][jenkins][Thread-1]Dataload failure for default.boundary_max_columns_test. Please check the logs
- test for maxcolumns value less than columns in 1st line of csv file
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [smart_500_de]
17/06/14 05:13:58 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [smart_500_de]
17/06/14 05:14:03 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.smart_500_de
17/06/14 05:14:04 ERROR DataLoadExecutor: [Executor task launch worker-2][partitionID:default_smart_500_de_82dfe7b2-f518-4fda-9ae6-49b67e75a5f6] Data Load is partially success for table smart_500_de
17/06/14 05:14:04 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is partially successful for default.smart_500_de
- test for duplicate column name in the Fileheader options in load command
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [char_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [char_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [max_columns_value_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [max_columns_value_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [boundary_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [boundary_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [valid_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [valid_max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [max_columns_test] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [smart_500_de] under database [default]
17/06/14 05:14:04 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [smart_500_de] under database [default]
17/06/14 05:14:04 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [timestamptyenulldata]
TimestampDataTypeNullDataTest:
17/06/14 05:14:05 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [timestamptyenulldata]
17/06/14 05:14:05 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.timestamptyenulldata
17/06/14 05:14:05 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.timestamptyenulldata
- SELECT max(dateField) FROM timestampTyeNullData where dateField is not null
- SELECT * FROM timestampTyeNullData where dateField is null
17/06/14 05:14:06 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleting table [timestamptyenulldata] under database [default]
17/06/14 05:14:06 AUDIT CarbonDropTableCommand: [jenkins-ubuntu1][jenkins][Thread-1]Deleted table [timestamptyenulldata] under database [default]
JoinWithoutDictionaryColumn:
17/06/14 05:14:07 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [mobile]
17/06/14 05:14:08 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [mobile]
17/06/14 05:14:08 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [emp]
17/06/14 05:14:13 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [emp]
17/06/14 05:14:13 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [mobile_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [mobile_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Creating Table with Database name [default] and Table name [emp_d]
17/06/14 05:17:06 AUDIT CreateTable: [jenkins-ubuntu1][jenkins][Thread-1]Table created with Database name [default] and Table name [emp_d]
17/06/14 05:17:06 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.mobile
17/06/14 05:17:07 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.mobile
17/06/14 05:21:02 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load request has been received for table default.emp
17/06/14 05:38:29 AUDIT CarbonDataRDDFactory$: [jenkins-ubuntu1][jenkins][Thread-1]Data load is successful for default.emp
Sending e-mails to: commits@carbondata.apache.org
ERROR: Failed to parse POMs
java.io.IOException: Backing channel 'ubuntu-us1' is disconnected.
	at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
	at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
	at com.sun.proxy.$Proxy124.isAlive(Unknown Source)
	at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1043)
	at hudson.maven.ProcessCache$MavenProcess.call(ProcessCache.java:166)
	at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.doRun(MavenModuleSetBuild.java:873)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
	at hudson.model.Run.execute(Run.java:1728)
	at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:544)
	at hudson.model.ResourceController.execute(ResourceController.java:98)
	at hudson.model.Executor.run(Executor.java:405)
Caused by: hudson.remoting.Channel$OrderlyShutdown: hudson.remoting.ProxyException: java.util.concurrent.TimeoutException: Ping started at 1497442951336 hasn't completed by 1497443202839
	at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1129)
	at hudson.remoting.Channel$1.handle(Channel.java:527)
	at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:83)
Caused by: Command close created at
	at hudson.remoting.Command.<init>(Command.java:60)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1123)
	at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1121)
	at hudson.remoting.Channel.close(Channel.java:1281)
	at hudson.slaves.ChannelPinger$1.onDead(ChannelPinger.java:180)
	at hudson.remoting.PingThread.ping(PingThread.java:130)
	at hudson.remoting.PingThread.run(PingThread.java:86)
Caused by: hudson.remoting.ProxyException: java.util.concurrent.TimeoutException: Ping started at 1497442951336 hasn't completed by 1497443202839
	... 2 more
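
The two epoch timestamps in the TimeoutException are 251,503 ms apart (1497443202839 - 1497442951336), so the ubuntu-us1 agent went roughly 4 minutes 12 seconds without answering a ping; ChannelPinger's onDead then closed the remoting channel (its ping timeout at the time was about four minutes by default). Everything after this point, including the two offline errors below, is fallout from that single disconnect rather than a further build problem.
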
ERROR: ubuntu-us1 is offline; cannot locate JDK 1.8 (latest)
ERROR: ubuntu-us1 is offline; cannot locate Maven 3.3.9