Posted to issues@tajo.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2014/08/06 05:20:34 UTC

Build failed in Jenkins: Tajo-0.8.1-nightly #137

See <https://builds.apache.org/job/Tajo-0.8.1-nightly/137/>

------------------------------------------
[...truncated 1349 lines...]
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 8 ms. row count = 1
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 34,044,142
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 80,031B for [age] INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 10000 records.
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 10000
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 53,438,685
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col1] BOOLEAN: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col2] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col3] BINARY: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col4] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col5] INT32: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col6] INT64: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col7] FLOAT: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 4B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 26B for [col8] DOUBLE: 13 values, 9B raw, 9B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col9] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col10] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col11] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col13] BINARY: 13 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 13B raw, 1B comp}
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 13 records.
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:51 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 13
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 48,824,426
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 24B for [myboolean] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [mybit] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 28B for [mychar] BINARY: 1 values, 11B raw, 11B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [myint2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [myint4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [myint8] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.063 sec
Running org.apache.tajo.storage.TestLazyTuple
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec
Running org.apache.tajo.storage.v2.TestStorages
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.368 sec
Running org.apache.tajo.storage.v2.TestCSVScanner
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.727 sec
Running org.apache.tajo.storage.v2.TestCSVCompression
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.219 sec
Running org.apache.tajo.storage.index.TestSingleCSVFileBSTIndex
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.859 sec
Running org.apache.tajo.storage.index.TestBSTIndex
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.695 sec
Running org.apache.tajo.storage.TestFileSystems
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec
Running org.apache.tajo.storage.TestStorageManager
Formatting using clusterid: testClusterID
Formatting using clusterid: testClusterID
Tests run: 3, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 2.194 sec <<< FAILURE!
testGetSplit(org.apache.tajo.storage.TestStorageManager)  Time elapsed: 1.692 sec  <<< ERROR!
java.net.UnknownHostException: asf901.ygridcore.net: asf901.ygridcore.net
	at java.net.InetAddress.getLocalHost(InetAddress.java:1402)
	at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:186)
	at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:206)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1746)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1218)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:684)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:351)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:332)
	at org.apache.tajo.storage.TestStorageManager.testGetSplit(TestStorageManager.java:110)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

testGetSplitWithBlockStorageLocationsBatching(org.apache.tajo.storage.TestStorageManager)  Time elapsed: 0.461 sec  <<< ERROR!
java.net.UnknownHostException: asf901.ygridcore.net: asf901.ygridcore.net
	at java.net.InetAddress.getLocalHost(InetAddress.java:1402)
	at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:186)
	at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:206)
	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1746)
	at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1218)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:684)
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:351)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:332)
	at org.apache.tajo.storage.TestStorageManager.testGetSplitWithBlockStorageLocationsBatching(TestStorageManager.java:165)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
	at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
	at org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)

Running org.apache.tajo.storage.TestFrameTuple
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.001 sec
Running org.apache.tajo.storage.TestVTuple
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec
Running org.apache.tajo.storage.TestTupleComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0 sec
Running org.apache.tajo.storage.TestMergeScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.572 sec
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [myfloat4] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [myfloat8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 32B for [mytext] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 32B for [myblob] BINARY: 1 values, 15B raw, 15B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:58 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 51,131,316
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 24B for [col1] BOOLEAN: 1 values, 7B raw, 7B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col2] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [col3] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col4] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col5] INT32: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [col6] INT64: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 27B for [col7] FLOAT: 1 values, 10B raw, 10B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [col8] DOUBLE: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [col9] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 34B for [col10] BINARY: 1 values, 17B raw, 17B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [col11] BINARY: 1 values, 14B raw, 14B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 0 ms. row count = 1
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 34,044,142
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 80,031B for [age] INT64: 10,000 values, 80,008B raw, 80,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [score] FLOAT: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 10000 records.
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:19:59 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 10000
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 36,271,037
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [file] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [name] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 10B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [age] INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordWriter: Flushing mem store to file. allocated memory: 36,271,037
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 40,031B for [id] INT32: 10,000 values, 40,008B raw, 40,008B comp, 1 pages, encodings: [RLE, BIT_PACKED, PLAIN]
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [file] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 11B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [name] BINARY: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 10B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ColumnChunkPageWriteStore: written 31B for [age] INT64: 10,000 values, 12B raw, 12B comp, 1 pages, encodings: [RLE, PLAIN_DICTIONARY, BIT_PACKED], dic { 1 entries, 8B raw, 1B comp}
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 10000 records.
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 0 ms. row count = 10000
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 10000 records.
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 6, 2014 3:20:25 AM INFO: parquet.hadoop.InternalParquetRecordReader: block read in memory in 0 ms. row count = 10000

Results :

Tests in error: 
  testGetSplit(org.apache.tajo.storage.TestStorageManager): asf901.ygridcore.net: asf901.ygridcore.net
  testGetSplitWithBlockStorageLocationsBatching(org.apache.tajo.storage.TestStorageManager): asf901.ygridcore.net: asf901.ygridcore.net

Tests run: 149, Failures: 0, Errors: 2, Skipped: 0
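
Note on the two errors: both stack traces show the same root cause. SecurityUtil.getLocalHostName() calls InetAddress.getLocalHost(), which fails because the build slave's own hostname (asf901.ygridcore.net) was not resolvable when MiniDFSCluster tried to start its DataNode; the tests themselves are not at fault, which is consistent with build #138 going back to normal. A minimal standalone sketch of the failure mode, plain JDK only (the class name LocalHostProbe is illustrative, not Tajo or Hadoop code):

import java.net.InetAddress;
import java.net.UnknownHostException;

// Reproduces the failure mode seen above: InetAddress.getLocalHost()
// throws UnknownHostException when the machine's own hostname has no
// DNS or /etc/hosts entry, which is what aborts DataNode startup in
// MiniDFSCluster before the test body ever runs.
public class LocalHostProbe {
    public static void main(String[] args) {
        try {
            InetAddress self = InetAddress.getLocalHost();
            System.out.println("local hostname resolves to " + self.getHostAddress());
        } catch (UnknownHostException e) {
            // Same exception class, and the same "host: host" message shape
            // reported for the two failing tests.
            System.err.println("local hostname does not resolve: " + e.getMessage());
        }
    }
}

On a host with broken name resolution this prints the same "asf901.ygridcore.net: asf901.ygridcore.net" style message seen in the traces; mapping the hostname in /etc/hosts on the build slave is the usual remedy.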

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  6.873 s]
[INFO] Tajo Project POM .................................. SUCCESS [  0.739 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.626 s]
[INFO] Tajo Common ....................................... SUCCESS [  6.584 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  1.193 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  5.285 s]
[INFO] Tajo Rpc .......................................... SUCCESS [ 21.250 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.049 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [  5.501 s]
[INFO] Tajo Storage ...................................... FAILURE [ 41.110 s]
[INFO] Tajo Yarn PullServer .............................. SKIPPED
[INFO] Tajo Client ....................................... SKIPPED
[INFO] Tajo JDBC Driver .................................. SKIPPED
[INFO] Tajo Catalog Drivers HCatalog ..................... SKIPPED
[INFO] Tajo Core Backend ................................. SKIPPED
[INFO] Tajo Catalog Drivers .............................. SKIPPED
[INFO] Tajo Catalog ...................................... SKIPPED
[INFO] Tajo Distribution ................................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:33 min
[INFO] Finished at: 2014-08-06T03:20:26+00:00
[INFO] Final Memory: 51M/1234M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.12.4:test (default-test) on project tajo-storage: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Tajo-0.8.1-nightly/ws/tajo-storage/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-storage
Build step 'Execute shell' marked build as failure
Recording test results

Jenkins build is back to normal : Tajo-0.8.1-nightly #138

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Tajo-0.8.1-nightly/138/>