Posted to issues@tajo.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2015/08/24 07:16:37 UTC

Build failed in Jenkins: Tajo-master-nightly #806

See <https://builds.apache.org/job/Tajo-master-nightly/806/changes>

Changes:

[jhkim] TAJO-1799: Fix incorrect event handler when kill-query failed.

------------------------------------------
[...truncated 727940 lines...]
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 1 records.
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:41 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 1
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 212
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 43B for [l_orderkey] INT32: 5 values, 10B raw, 10B comp, 1 pages, encodings: [PLAIN_DICTIONARY, BIT_PACKED, RLE], dic { 3 entries, 12B raw, 3B comp}
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for [l_shipdate] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ColumnChunkPageWriteStore: written 123B for [l_shipdate_function] BINARY: 5 values, 76B raw, 76B comp, 1 pages, encodings: [PLAIN, BIT_PACKED, RLE]
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: reading another 1 footers
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.ParquetFileReader: Initiating action with parallelism: 5
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: RecordReader initialized will read a total of 5 records.
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: at row 0. reading next block
Aug 24, 2015 5:02:44 AM INFO: org.apache.parquet.hadoop.InternalParquetRecordReader: block read in memory in 1 ms. row count = 5
2015-08-24 05:17:06,002 INFO: org.mortbay.log (info(67)) - Shutdown hook executing
2015-08-24 05:17:06,002 INFO: org.mortbay.log (info(67)) - Shutdown hook complete
2015-08-24 05:17:06,003 INFO: org.apache.tajo.util.history.HistoryWriter (run(268)) - HistoryWriter_asf909.gq1.ygridcore.net_32279 stopped.
2015-08-24 05:17:06,002 INFO: org.apache.tajo.master.TajoMaster (run(538)) - ============================================
2015-08-24 05:17:06,004 INFO: org.apache.tajo.master.TajoMaster (run(539)) - TajoMaster received SIGINT Signal
2015-08-24 05:17:06,004 INFO: org.apache.tajo.master.TajoMaster (run(540)) - ============================================
2015-08-24 05:17:06,004 INFO: org.apache.tajo.util.history.HistoryCleaner (run(136)) - History cleaner stopped
2015-08-24 05:17:06,005 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (Tajo-REST) listened on 0:0:0:0:0:0:0:0:32278) shutdown
2015-08-24 05:17:06,006 INFO: org.apache.tajo.worker.NodeStatusUpdater (serviceStop(111)) - NodeStatusUpdater stopped.
2015-08-24 05:17:06,006 INFO: org.apache.tajo.ws.rs.TajoRestService (serviceStop(129)) - Tajo Rest Service stopped.
2015-08-24 05:17:06,006 INFO: org.apache.tajo.worker.NodeStatusUpdater (run(262)) - Heartbeat Thread stopped.
2015-08-24 05:17:06,007 INFO: org.apache.tajo.catalog.CatalogServer (serviceStop(178)) - Catalog Server (127.0.0.1:32275) shutdown
2015-08-24 05:17:06,007 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (CatalogProtocol) listened on 127.0.0.1:32275) shutdown
2015-08-24 05:17:06,008 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (QueryMasterProtocol) listened on 0:0:0:0:0:0:0:0:32281) shutdown
2015-08-24 05:17:06,008 INFO: org.apache.tajo.querymaster.QueryMasterManagerService (serviceStop(106)) - QueryMasterManagerService stopped
2015-08-24 05:17:06,009 INFO: org.apache.tajo.querymaster.QueryMaster (run(417)) - QueryMaster heartbeat thread stopped
2015-08-24 05:17:06,013 INFO: org.apache.tajo.util.history.HistoryWriter (run(268)) - HistoryWriter_127.0.0.1_32277 stopped.
2015-08-24 05:17:06,016 INFO: org.apache.tajo.querymaster.QueryMaster (serviceStop(168)) - QueryMaster stopped
2015-08-24 05:17:06,016 INFO: org.apache.tajo.worker.TajoWorkerClientService (stop(99)) - TajoWorkerClientService stopping
2015-08-24 05:17:06,017 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (QueryMasterClientProtocol) listened on 0:0:0:0:0:0:0:0:32280) shutdown
2015-08-24 05:17:06,017 INFO: org.apache.tajo.worker.TajoWorkerClientService (stop(103)) - TajoWorkerClientService stopped
2015-08-24 05:17:06,017 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (TajoWorkerProtocol) listened on 0:0:0:0:0:0:0:0:32279) shutdown
2015-08-24 05:17:06,017 INFO: BlockStateChange (processAndHandleReportedBlock(3171)) - BLOCK* addBlock: block blk_1073748629_7805 on node 127.0.0.1:47549 size 134217728 does not belong to any file
2015-08-24 05:17:06,017 INFO: org.apache.tajo.worker.TajoWorkerManagerService (serviceStop(93)) - TajoWorkerManagerService stopped
2015-08-24 05:17:06,018 INFO: BlockStateChange (add(115)) - BLOCK* InvalidateBlocks: add blk_1073748629_7805 to 127.0.0.1:47549
2015-08-24 05:17:06,019 INFO: org.apache.tajo.worker.TajoWorker (serviceStop(377)) - TajoWorker main thread exiting
2015-08-24 05:17:06,019 INFO: BlockStateChange (logAddStoredBlock(2624)) - BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:47549 is added to blk_1073741859_1035{UCState=COMMITTED, truncateBlock=null, primaryNodeIndex=-1, replicas=[ReplicaUC[[DISK]DS-9a637e43-f8dd-4eee-8e00-e22e4fc3097f:NORMAL:127.0.0.1:47549|RBW]]} size 3247307
2015-08-24 05:17:06,022 WARN: org.apache.hadoop.hdfs.DFSClient (flushOrSync(2025)) - Unable to persist blocks in hflush for /tajo/system/ha/active/127.0.0.1_43771
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tajo/system/ha/active/127.0.0.1_43771 (inode 29638): File does not exist. Holder DFSClient_NONMAPREDUCE_-1700921754_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3433)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3998)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1210)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:903)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy32.fsync(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.fsync(ClientNamenodeProtocolTranslatorPB.java:838)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy33.fsync(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy73.fsync(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy73.fsync(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2022)
	at org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:1898)
	at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:139)
	at org.apache.tajo.ha.HdfsServiceTracker.createMasterFile(HdfsServiceTracker.java:244)
	at org.apache.tajo.ha.HdfsServiceTracker.register(HdfsServiceTracker.java:155)
	at org.apache.tajo.ha.HdfsServiceTracker$PingChecker.run(HdfsServiceTracker.java:374)
	at java.lang.Thread.run(Thread.java:724)
2015-08-24 05:17:06,022 WARN: org.apache.hadoop.hdfs.DFSClient (flushOrSync(2047)) - Error while syncing
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tajo/system/ha/active/127.0.0.1_43771 (inode 29638): File does not exist. Holder DFSClient_NONMAPREDUCE_-1700921754_1 does not have any open files.
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3433)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.fsync(FSNamesystem.java:3998)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.fsync(NameNodeRpcServer.java:1210)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.fsync(ClientNamenodeProtocolServerSideTranslatorPB.java:903)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

	at org.apache.hadoop.ipc.Client.call(Client.java:1476)
	at org.apache.hadoop.ipc.Client.call(Client.java:1407)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy32.fsync(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.fsync(ClientNamenodeProtocolTranslatorPB.java:838)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy33.fsync(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy73.fsync(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279)
	at com.sun.proxy.$Proxy73.fsync(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2022)
	at org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:1898)
	at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:139)
	at org.apache.tajo.ha.HdfsServiceTracker.createMasterFile(HdfsServiceTracker.java:244)
	at org.apache.tajo.ha.HdfsServiceTracker.register(HdfsServiceTracker.java:155)
	at org.apache.tajo.ha.HdfsServiceTracker$PingChecker.run(HdfsServiceTracker.java:374)
	at java.lang.Thread.run(Thread.java:724)
2015-08-24 05:17:06,022 WARN: org.apache.tajo.rpc.NettyClientBase (doReconnect(198)) - Exception [org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:32276)]: ClosedChannelException:  Try to reconnect : /127.0.0.1:32276
2015-08-24 05:17:06,023 WARN: org.apache.hadoop.hdfs.DFSClient (closeResponder(612)) - Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1260)
	at java.lang.Thread.join(Thread.java:1334)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:610)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:578)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:574)
2015-08-24 05:17:06,025 ERROR: org.apache.hadoop.hdfs.server.datanode.DataNode (run(278)) - 127.0.0.1:47549:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:42671 dst: /127.0.0.1:47549
java.io.IOException: Premature EOF from inputStream
	at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
	at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:472)
	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:849)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:804)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
	at java.lang.Thread.run(Thread.java:724)
2015-08-24 05:17:06,419 INFO: org.apache.tajo.util.history.HistoryCleaner (run(136)) - History cleaner stopped
2015-08-24 05:17:06,420 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (QueryCoordinatorProtocol) listened on 127.0.0.1:32277) shutdown
2015-08-24 05:17:06,421 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (TajoMasterClientProtocol) listened on 127.0.0.1:32276) shutdown
2015-08-24 05:17:06,424 INFO: org.apache.tajo.rpc.NettyServerBase (shutdown(173)) - Rpc (TajoResourceTrackerProtocol) listened on 127.0.0.1:32274) shutdown
2015-08-24 05:17:06,424 INFO: org.apache.tajo.master.TajoMaster (serviceStop(406)) - Tajo Master main thread exiting
2015-08-24 05:17:06,971 INFO: BlockStateChange (invalidateWorkForOneNode(3488)) - BLOCK* BlockManager: ask 127.0.0.1:47549 to delete [blk_1073748626_7802, blk_1073748627_7803, blk_1073748628_7804, blk_1073748629_7805]
2015-08-24 05:17:07,024 WARN: org.apache.tajo.rpc.NettyClientBase (doReconnect(198)) - Exception [org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:32276)]: ClosedChannelException:  Try to reconnect : /127.0.0.1:32276
2015-08-24 05:17:08,031 WARN: org.apache.tajo.rpc.NettyClientBase (doReconnect(198)) - Exception [org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:32276)]: ConnectException: Connection refused: /127.0.0.1:32276 Try to reconnect : /127.0.0.1:32276
2015-08-24 05:17:09,033 WARN: org.apache.tajo.rpc.NettyClientBase (doReconnect(198)) - Exception [org.apache.tajo.ipc.TajoMasterClientProtocol(/127.0.0.1:32276)]: ConnectException: Connection refused: /127.0.0.1:32276 Try to reconnect : /127.0.0.1:32276

Results :

Tests in error: 
  TestHAServiceHDFSImpl.testAutoFailOver:82->verifyDataBaseAndTable:152 » TajoInternal

Tests run: 1689, Failures: 0, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Tajo Main ......................................... SUCCESS [  1.853 s]
[INFO] Tajo Project POM .................................. SUCCESS [  1.737 s]
[INFO] Tajo Maven Plugins ................................ SUCCESS [  2.983 s]
[INFO] Tajo Common ....................................... SUCCESS [ 32.049 s]
[INFO] Tajo Algebra ...................................... SUCCESS [  2.727 s]
[INFO] Tajo Catalog Common ............................... SUCCESS [  4.882 s]
[INFO] Tajo Plan ......................................... SUCCESS [  6.362 s]
[INFO] Tajo Rpc Common ................................... SUCCESS [  1.375 s]
[INFO] Tajo Protocol Buffer Rpc .......................... SUCCESS [ 48.831 s]
[INFO] Tajo Catalog Client ............................... SUCCESS [  1.333 s]
[INFO] Tajo Catalog Server ............................... SUCCESS [01:17 min]
[INFO] Tajo Storage Common ............................... SUCCESS [ 10.130 s]
[INFO] Tajo HDFS Storage ................................. SUCCESS [01:03 min]
[INFO] Tajo PullServer ................................... SUCCESS [  1.285 s]
[INFO] Tajo Client ....................................... SUCCESS [  2.415 s]
[INFO] Tajo CLI tools .................................... SUCCESS [  2.434 s]
[INFO] ASM (thirdparty) .................................. SUCCESS [  1.784 s]
[INFO] Tajo RESTful Container ............................ SUCCESS [  4.043 s]
[INFO] Tajo Metrics ...................................... SUCCESS [  1.388 s]
[INFO] Tajo Core ......................................... SUCCESS [ 10.773 s]
[INFO] Tajo RPC .......................................... SUCCESS [  1.041 s]
[INFO] Tajo Catalog Drivers Hive ......................... SUCCESS [ 28.285 s]
[INFO] Tajo Catalog Drivers .............................. SUCCESS [  0.115 s]
[INFO] Tajo Catalog ...................................... SUCCESS [  1.036 s]
[INFO] Tajo Client Example ............................... SUCCESS [  1.096 s]
[INFO] Tajo HBase Storage ................................ SUCCESS [  4.777 s]
[INFO] Tajo Cluster Tests ................................ SUCCESS [  2.726 s]
[INFO] Tajo JDBC Driver .................................. SUCCESS [ 28.743 s]
[INFO] Tajo Storage ...................................... SUCCESS [  1.010 s]
[INFO] Tajo Distribution ................................. SUCCESS [  5.390 s]
[INFO] Tajo Core Tests ................................... FAILURE [22:33 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 28:27 min
[INFO] Finished at: 2015-08-24T05:17:10+00:00
[INFO] Final Memory: 78M/476M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project tajo-core-tests: There are test failures.
[ERROR] 
[ERROR] Please refer to <https://builds.apache.org/job/Tajo-master-nightly/ws/tajo-core-tests/target/surefire-reports> for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :tajo-core-tests
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Tajo-master-nightly #805
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 58875789 bytes
Compression is 0.0%
Took 23 sec
Recording test results