Posted to issues@tajo.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2016/03/04 08:32:39 UTC

Build failed in Jenkins: Tajo-master-build #1098

See <https://builds.apache.org/job/Tajo-master-build/1098/changes>

Changes:

[jihoonson] TAJO-2082: Aggregation on a derived table which includes union can cause

------------------------------------------
[...truncated 1824 lines...]
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 28
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [id] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [age] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [score] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:30 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 48
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col1] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 1 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: block read in memory in 0 ms. row count = 1
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 1 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: block read in memory in 0 ms. row count = 1
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 0 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 1 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: block read in memory in 0 ms. row count = 1
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 10000 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: block read in memory in 0 ms. row count = 10000
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 12 records.
Mar 04, 2016 7:32:31 AM org.apache.parquet.Log info
INFO: block read in memory in 2 ms. row count = 12
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col2] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col5] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 28
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col1] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [col2] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col3] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 0
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 18
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 65B for [col1] BINARY: 1 values, 20B raw, 20B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 280,000
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 80,055B for [age] INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 40,047B for [score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 66,794
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [col1] BOOLEAN: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col2] BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col3] INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col4] INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col5] INT64: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col6] FLOAT: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col7] DOUBLE: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col8] BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 49B for [col9] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col10] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [col12] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 13B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 66,794
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 36B for [col1] BOOLEAN: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col2] BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col3] INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col4] INT32: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Tests run: 180, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 10.314 sec <<< FAILURE! - in org.apache.tajo.storage.TestStorages
testNullHandlingTypesWithProjection[3](org.apache.tajo.storage.TestStorages)  Time elapsed: 0.05 sec  <<< ERROR!
org.apache.parquet.schema.InvalidSchemaException: A group type can not be empty. Parquet does not support empty group without leaves. Empty group: table_schema
	at org.apache.parquet.schema.GroupType.<init>(GroupType.java:92)
	at org.apache.parquet.schema.GroupType.<init>(GroupType.java:48)
	at org.apache.parquet.schema.MessageType.<init>(MessageType.java:50)
	at org.apache.tajo.storage.parquet.TajoSchemaConverter.convert(TajoSchemaConverter.java:151)
	at org.apache.tajo.storage.parquet.TajoReadSupport.init(TajoReadSupport.java:77)
	at org.apache.tajo.storage.thirdparty.parquet.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:170)
	at org.apache.tajo.storage.thirdparty.parquet.ParquetReader.initReader(ParquetReader.java:161)
	at org.apache.tajo.storage.thirdparty.parquet.ParquetReader.read(ParquetReader.java:137)
	at org.apache.tajo.storage.parquet.ParquetScanner.next(ParquetScanner.java:91)
	at org.apache.tajo.storage.TestStorages.testNullHandlingTypesWithProjection(TestStorages.java:653)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.junit.runners.Suite.runChild(Suite.java:128)
	at org.junit.runners.Suite.runChild(Suite.java:27)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:344)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:269)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:240)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:184)
	at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:286)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:240)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
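(Note on the failure above, not part of the Jenkins output: the InvalidSchemaException comes from Parquet's schema validation, which since parquet-mr 1.8 rejects any group type built with zero leaf fields. A minimal sketch that reproduces the same message follows; the class name EmptyGroupRepro is illustrative, and it assumes, per the stack trace, that TajoSchemaConverter.convert handed MessageType an empty column list for table_schema.)

    import java.util.Collections;

    import org.apache.parquet.schema.MessageType;
    import org.apache.parquet.schema.Type;

    public class EmptyGroupRepro {
      public static void main(String[] args) {
        // MessageType delegates to GroupType's constructor, which throws
        // InvalidSchemaException("A group type can not be empty...") when
        // the field list is empty, the same path as GroupType.java:92
        // in the stack trace above.
        new MessageType("table_schema", Collections.<Type>emptyList());
      }
    }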

Running org.apache.tajo.storage.TestFileSystems
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec - in org.apache.tajo.storage.TestFileSystems
Running org.apache.tajo.storage.avro.TestAvroUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec - in org.apache.tajo.storage.avro.TestAvroUtil
Running org.apache.tajo.storage.TestCompressionStorages
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.553 sec - in org.apache.tajo.storage.TestCompressionStorages
Running org.apache.tajo.storage.parquet.TestReadWrite
Mar 04, 2016 7:32:37 AM org.apache.parquet.Log info
INFO: RecordReader initialized will read a total of 1 records.
Mar 04, 2016 7:32:37 AM org.apache.parquet.Log info
INFO: block read in memory in 1 ms. row count = 1
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec - in org.apache.tajo.storage.parquet.TestReadWrite
Running org.apache.tajo.storage.parquet.TestSchemaConverter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.004 sec - in org.apache.tajo.storage.parquet.TestSchemaConverter
Running org.apache.tajo.storage.TestFileTablespace
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.192 sec - in org.apache.tajo.storage.TestFileTablespace
Running org.apache.tajo.storage.TestDelimitedTextFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.111 sec - in org.apache.tajo.storage.TestDelimitedTextFile
Running org.apache.tajo.storage.TestByteBufLineReader
Formatting using clusterid: testClusterID
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.229 sec - in org.apache.tajo.storage.TestByteBufLineReader
Running org.apache.tajo.storage.TestLineReader
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec - in org.apache.tajo.storage.TestLineReader
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col5] INT64: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 42B for [col6] FLOAT: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 4B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [col7] DOUBLE: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 48B for [col8] BINARY: 12 values, 9B raw, 9B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 49B for [col9] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 11B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [col10] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 8B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 53B for [col12] BINARY: 12 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN_DICTIONARY], dic { 1 entries, 13B raw, 1B comp}
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:31 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 65,659
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [myboolean] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [mybit] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 38B for [mychar] BINARY: 1 values, 11B raw, 11B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myint4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myint8] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [myfloat4] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 55B for [myfloat8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [mytext] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 50B for [myblob] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [BIT_PACKED, RLE, PLAIN]
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Mar 4, 2016 7:32:37 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5

Results :

Tests in error: 
  TestStorages.testNullHandlingTypesWithProjection:653 » InvalidSchema A group t...

Tests run: 264, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.720 s]
[INFO] Tajo Project POM .................................. SUCCESS [  1.264 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.817 s]
[INFO] Tajo Common ....................................... SUCCESS [ 29.682 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.601 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  5.202 s]
[INFO] Tajo Plan ......................................... SUCCESS [  6.973 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.167 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [02:24 min]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.352 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [ 10.571 s]
[INFO] Tajo Storage Common ............................... SUCCESS [  2.943 s]
[INFO] Tajo HDFS Storage ................................. FAILURE [ 56.565 s]
[INFO] Tajo PullServer ................................... SKIPPED
[INFO] Tajo Client ....................................... SKIPPED
[INFO] Tajo CLI tools .................................... SKIPPED
[INFO] Tajo SQL Parser ................................... SKIPPED
[INFO] ASM (thirdparty) .................................. SKIPPED
[INFO] Tajo RESTful Container ............................ SKIPPED
[INFO] Tajo Metrics ...................................... SKIPPED
[INFO] Tajo Core ......................................... SKIPPED
[INFO] Tajo RPC .......................................... SKIPPED
[INFO] Tajo Catalog Drivers Hive ......................... SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Client Example ............................... SKIPPED
[INFO] Tajo HBase Storage ................................ SKIPPED
[INFO] Tajo Cluster Tests ................................ SKIPPED
[INFO] Tajo JDBC Driver .................................. SKIPPED
[INFO] Tajo JDBC storage common .......................... SKIPPED
[INFO] Tajo PostgreSQL JDBC storage ...................... SKIPPED
[INFO] Tajo S3 storage ................................... SKIPPED
[INFO] Tajo Storage ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] Tajo Core Tests ................................... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:28 min
[INFO] Finished at: 2016-03-04T07:32:44+00:00
[INFO] Final Memory: 104M/1489M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.19:test (default-test) on project tajo-storage-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Tajo-master-build/ws/tajo-storage/tajo-storage-hdfs/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-storage-hdfs
Build step 'Execute shell' marked build as failure
Updating TAJO-2082

Jenkins build is back to normal : Tajo-master-build #1099

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Tajo-master-build/1099/changes>