Posted to dev@pig.apache.org by Apache Jenkins Server <je...@builds.apache.org> on 2011/07/19 00:33:13 UTC

Build failed in Jenkins: Pig-trunk #1047

See <https://builds.apache.org/job/Pig-trunk/1047/changes>

Changes:

[dvryaboy] PIG-2143: Make PigStorage optionally store schema; improve docs.

[thejas] PIG-1973: UDFContext.getUDFContext usage of ThreadLocal pattern is not typical (woody via thejas)

[thejas] PIG-2053: PigInputFormat uses class.isAssignableFrom() where instanceof is more appropriate (woody via thejas)
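
A note on the PIG-2053 entry above: Class.isAssignableFrom() and the instanceof operator test the same subtype relationship, but instanceof reads in the natural direction, needs no reflection, and simply evaluates to false on null instead of throwing. The minimal Java sketch below illustrates only that general distinction; the class names are hypothetical and this is not the actual PigInputFormat patch.

    // Hypothetical classes for illustration -- not Pig's real loader hierarchy.
    class Loader {}
    class CombinableLoader extends Loader {}

    class InstanceCheckDemo {
        // Reflective form: "is loader's runtime class a subtype of CombinableLoader?"
        // Throws NullPointerException if loader is null.
        static boolean viaIsAssignableFrom(Loader loader) {
            return CombinableLoader.class.isAssignableFrom(loader.getClass());
        }

        // Direct form: clearer intent, no reflection, and false for null.
        static boolean viaInstanceof(Loader loader) {
            return loader instanceof CombinableLoader;
        }

        public static void main(String[] args) {
            System.out.println(viaIsAssignableFrom(new CombinableLoader())); // true
            System.out.println(viaInstanceof(new Loader()));                 // false
            System.out.println(viaInstanceof(null));                         // false
            // viaIsAssignableFrom(null) would throw NullPointerException here.
        }
    }

Both checks agree for non-null subtypes; the difference is null handling and readability, which is presumably what the patch is after.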

------------------------------------------
[...truncated 39691 lines...]
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/18 22:32:59 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output-109919389985994422.txt_cleanupOnFailure_succeeded2 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-109919389985994422.txt_cleanupOnFailure_succeeded2 by DFSClient_-18740787
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-109919389985994422.txt_cleanupOnFailure_succeeded2 by DFSClient_-18740787
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniCluster.shutdownMiniDfsAndMrClusters(MiniCluster.java:111)
    [junit] 	at org.apache.pig.test.MiniCluster.shutDown(MiniCluster.java:101)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 3
    [junit] 11/07/18 22:32:59 INFO ipc.Server: Stopping server on 46625
    [junit] 11/07/18 22:32:59 INFO ipc.Server: Stopping IPC Server listener on 46625
    [junit] 11/07/18 22:32:59 INFO ipc.Server: IPC Server handler 0 on 46625: exiting
    [junit] 11/07/18 22:32:59 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/18 22:32:59 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:53227, storageID=DS-1691353808-127.0.1.1-53227-1311028017697, infoPort=44166, ipcPort=46625):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/18 22:32:59 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:32:59 INFO ipc.Server: IPC Server handler 2 on 46625: exiting
    [junit] 11/07/18 22:32:59 INFO ipc.Server: IPC Server handler 1 on 46625: exiting
    [junit] 11/07/18 22:32:59 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/18 22:32:59 INFO datanode.DataNode: Deleting block blk_2370046613579184429_1123 file build/test/data/dfs/data/data2/current/blk_2370046613579184429
    [junit] 11/07/18 22:32:59 INFO datanode.DataNode: Deleting block blk_3770838622731599099_1122 file build/test/data/dfs/data/data1/current/blk_3770838622731599099
    [junit] 11/07/18 22:33:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/18 22:33:00 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:53227, storageID=DS-1691353808-127.0.1.1-53227-1311028017697, infoPort=44166, ipcPort=46625):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data7/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data8/current'}>
    [junit] 11/07/18 22:33:00 INFO ipc.Server: Stopping server on 46625
    [junit] 11/07/18 22:33:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 2
    [junit] 11/07/18 22:33:00 INFO ipc.Server: Stopping server on 33246
    [junit] 11/07/18 22:33:00 INFO ipc.Server: IPC Server handler 0 on 33246: exiting
    [junit] 11/07/18 22:33:00 INFO ipc.Server: Stopping IPC Server listener on 33246
    [junit] 11/07/18 22:33:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/18 22:33:00 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:00 INFO ipc.Server: IPC Server handler 1 on 33246: exiting
    [junit] 11/07/18 22:33:00 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:59632, storageID=DS-567738828-127.0.1.1-59632-1311028017388, infoPort=52830, ipcPort=33246):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/18 22:33:00 INFO ipc.Server: IPC Server handler 2 on 33246: exiting
    [junit] 11/07/18 22:33:01 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/18 22:33:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/18 22:33:01 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:59632, storageID=DS-567738828-127.0.1.1-59632-1311028017388, infoPort=52830, ipcPort=33246):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current'}>
    [junit] 11/07/18 22:33:01 INFO ipc.Server: Stopping server on 33246
    [junit] 11/07/18 22:33:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 11/07/18 22:33:01 INFO ipc.Server: Stopping server on 49039
    [junit] 11/07/18 22:33:01 INFO ipc.Server: IPC Server handler 1 on 49039: exiting
    [junit] 11/07/18 22:33:01 INFO ipc.Server: IPC Server handler 2 on 49039: exiting
    [junit] 11/07/18 22:33:01 INFO ipc.Server: Stopping IPC Server listener on 49039
    [junit] 11/07/18 22:33:01 INFO ipc.Server: IPC Server handler 0 on 49039: exiting
    [junit] 11/07/18 22:33:01 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/18 22:33:01 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:36770, storageID=DS-443723784-127.0.1.1-36770-1311028017086, infoPort=52584, ipcPort=49039):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/18 22:33:01 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:60346 to delete  blk_5757342234126463601_1126 blk_-2021053641489301832_1121
    [junit] 11/07/18 22:33:01 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:53227 to delete  blk_4672728988766360436_1127 blk_5757342234126463601_1126
    [junit] 11/07/18 22:33:02 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:36770, storageID=DS-443723784-127.0.1.1-36770-1311028017086, infoPort=52584, ipcPort=49039):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current'}>
    [junit] 11/07/18 22:33:02 INFO ipc.Server: Stopping server on 49039
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 11/07/18 22:33:02 INFO ipc.Server: Stopping server on 55267
    [junit] 11/07/18 22:33:02 INFO ipc.Server: IPC Server handler 0 on 55267: exiting
    [junit] 11/07/18 22:33:02 INFO ipc.Server: Stopping IPC Server listener on 55267
    [junit] 11/07/18 22:33:02 INFO ipc.Server: IPC Server handler 2 on 55267: exiting
    [junit] 11/07/18 22:33:02 INFO ipc.Server: IPC Server handler 1 on 55267: exiting
    [junit] 11/07/18 22:33:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/18 22:33:02 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:60346, storageID=DS-1066259841-127.0.1.1-60346-1311028016749, infoPort=50115, ipcPort=55267):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/18 22:33:02 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:60346, storageID=DS-1066259841-127.0.1.1-60346-1311028016749, infoPort=50115, ipcPort=55267):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current'}>
    [junit] 11/07/18 22:33:02 INFO ipc.Server: Stopping server on 55267
    [junit] 11/07/18 22:33:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/18 22:33:03 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/18 22:33:03 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/07/18 22:33:03 INFO namenode.FSNamesystem: Number of transactions: 694 Total time for transactions(ms): 13 Number of transactions batched in Syncs: 114 Number of syncs: 484 SyncTimes(ms): 3538 305 
    [junit] 11/07/18 22:33:03 INFO namenode.DecommissionManager: Interrupted Monitor
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping server on 38439
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 0 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 6 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 2 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 5 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server listener on 38439
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 3 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down: Map-events fetcher for all reduce tasks on tracker_host0.foo.com:localhost/127.0.0.1:45013
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 8 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 9 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 7 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping server on 45013
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 2 on 45013: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 4 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server listener on 45013
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 1 on 38439: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down StatusHttpServer
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 0 on 45013: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 1 on 45013: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 3 on 45013: exiting
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Interrupted. Closing down.
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down: Map-events fetcher for all reduce tasks on tracker_host1.foo.com:localhost/127.0.0.1:42601
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping server on 42601
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 0 on 42601: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 1 on 42601: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 2 on 42601: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 3 on 42601: exiting
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down StatusHttpServer
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server listener on 42601
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Interrupted. Closing down.
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down: Map-events fetcher for all reduce tasks on tracker_host2.foo.com:localhost/127.0.0.1:50586
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping server on 50586
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 0 on 50586: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server listener on 50586
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 2 on 50586: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 3 on 50586: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: IPC Server handler 1 on 50586: exiting
    [junit] 11/07/18 22:33:03 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:03 INFO mapred.TaskTracker: Shutting down StatusHttpServer
    [junit] 11/07/18 22:33:04 INFO mapred.TaskTracker: Interrupted. Closing down.
    [junit] 11/07/18 22:33:04 INFO mapred.TaskTracker: Shutting down: Map-events fetcher for all reduce tasks on tracker_host3.foo.com:localhost/127.0.0.1:47009
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping server on 47009
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 0 on 47009: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 1 on 47009: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping IPC Server listener on 47009
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 2 on 47009: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 3 on 47009: exiting
    [junit] 11/07/18 22:33:04 INFO mapred.TaskTracker: Shutting down StatusHttpServer
    [junit] 11/07/18 22:33:04 INFO mapred.TaskTracker: Interrupted. Closing down.
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopping infoServer
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopping interTrackerServer
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping server on 39145
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 0 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 1 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 2 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 5 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 7 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 3 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping IPC Server listener on 39145
    [junit] 11/07/18 22:33:04 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 8 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 4 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 6 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopped interTrackerServer
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopping expireTrackers
    [junit] 11/07/18 22:33:04 INFO ipc.Server: IPC Server handler 9 on 39145: exiting
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopping retirer
    [junit] 11/07/18 22:33:04 INFO mapred.EagerTaskInitializationListener: Stopping Job Init Manager thread
    [junit] 11/07/18 22:33:04 INFO mapred.EagerTaskInitializationListener: JobInitManagerThread interrupted.
    [junit] 11/07/18 22:33:04 INFO mapred.EagerTaskInitializationListener: Shutting down thread pool
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: Stopping expireLaunchingTasks
    [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 364.321 sec
    [junit] 11/07/18 22:33:04 INFO mapred.JobTracker: stopped all jobtracker services
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/18 22:33:04 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/18 22:33:04 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/18 22:33:04 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/18 22:33:04 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/18 22:33:04 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/18 22:33:04 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/18 22:33:04 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.122 sec
   [delete] Deleting directory /tmp/pig_junit_tmp1501241339

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:652: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:707: Tests failed!

Total time: 21 minutes 10 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure


Jenkins build is back to normal : Pig-trunk #1053

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1053/changes>



Build failed in Jenkins: Pig-trunk #1052

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1052/changes>

Changes:

[daijy] PIG-2125: Make Pig work with hadoop .NEXT (remove empty files)

[thejas] PIG-2179: tests in TestLoad are failing

------------------------------------------
[...truncated 39880 lines...]
    [junit] 11/07/22 01:36:56 INFO mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_20110722005425547_0013_m_000001_0
    [junit] 11/07/22 01:37:06 INFO mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_20110722005425547_0013_m_1310160433
    [junit] 11/07/22 01:37:06 INFO mapred.JvmManager: JVM Runner jvm_20110722005425547_0013_m_1310160433 spawned.
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: JVM with ID: jvm_20110722005425547_0013_m_1310160433 given task: attempt_20110722005425547_0013_m_000001_0
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: attempt_20110722005425547_0013_m_000001_0 0.0% 
    [junit] 11/07/22 01:37:17 INFO namenode.FSNamesystem: Number of transactions: 605 Total time for transactions(ms): 16 Number of transactions batched in Syncs: 96 Number of syncs: 419 SyncTimes(ms): 1307 235 
    [junit] 11/07/22 01:37:17 INFO FSNamesystem.audit: ugi=hudson,hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/TestStore-output--1293221001145632294.txt_1/_temporary	dst=null	perm=null
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: attempt_20110722005425547_0013_m_000001_0 0.0% cleanup
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: Task attempt_20110722005425547_0013_m_000001_0 is done.
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: reported output size for attempt_20110722005425547_0013_m_000001_0  was 0
    [junit] 11/07/22 01:37:17 INFO mapred.TaskTracker: addFreeSlot : current free slots : 2
    [junit] 11/07/22 01:37:17 INFO mapred.JvmManager: JVM : jvm_20110722005425547_0013_m_1310160433 exited. Number of tasks it ran: 1
    [junit] 11/07/22 01:37:18 INFO mapred.TaskTracker: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find taskTracker/jobcache/job_20110722005425547_0013/attempt_20110722005425547_0013_m_000001_0/output/file.out in any of the configured local directories
    [junit] 11/07/22 01:37:18 INFO mapred.JobInProgress: Task 'attempt_20110722005425547_0013_m_000001_0' has completed task_20110722005425547_0013_m_000001 successfully.
    [junit] 11/07/22 01:37:18 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/TestStore-output--1293221001145632294.txt_1/_logs/history/localhost_1311296065575_job_20110722005425547_0013_hudson_PigLatin%3ADefaultJobName. blk_-1117289598324678566_1112
    [junit] 11/07/22 01:37:18 INFO datanode.DataNode: Receiving block blk_-1117289598324678566_1112 src: /127.0.0.1:52033 dest: /127.0.0.1:51016
    [junit] 11/07/22 01:37:18 INFO datanode.DataNode: Receiving block blk_-1117289598324678566_1112 src: /127.0.0.1:47405 dest: /127.0.0.1:41558
    [junit] 11/07/22 01:37:18 INFO datanode.DataNode: Receiving block blk_-1117289598324678566_1112 src: /127.0.0.1:47208 dest: /127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:47208, dest: /127.0.0.1:42511, bytes: 6925, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-687389278-127.0.1.1-42511-1311296055429, blockid: blk_-1117289598324678566_1112
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 0 for block blk_-1117289598324678566_1112 terminating
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42511 is added to blk_-1117289598324678566_1112 size 6925
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:47405, dest: /127.0.0.1:41558, bytes: 6925, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-448864010-127.0.1.1-41558-1311296045100, blockid: blk_-1117289598324678566_1112
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 1 for block blk_-1117289598324678566_1112 terminating
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:41558 is added to blk_-1117289598324678566_1112 size 6925
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:52033, dest: /127.0.0.1:51016, bytes: 6925, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-2126937689-127.0.1.1-51016-1311296050405, blockid: blk_-1117289598324678566_1112
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 2 for block blk_-1117289598324678566_1112 terminating
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:51016 is added to blk_-1117289598324678566_1112 size 6925
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/TestStore-output--1293221001145632294.txt_1/_logs/history/localhost_1311296065575_job_20110722005425547_0013_hudson_PigLatin%3ADefaultJobName is closed by DFSClient_-1650800025
    [junit] 11/07/22 01:37:22 INFO mapred.JobTracker: Removed completed task 'attempt_20110722005425547_0013_m_000000_0' from 'tracker_host3.foo.com:localhost/127.0.0.1:58112'
    [junit] 11/07/22 01:37:22 INFO mapred.JobTracker: Removed completed task 'attempt_20110722005425547_0013_m_000001_0' from 'tracker_host3.foo.com:localhost/127.0.0.1:58112'
    [junit] 11/07/22 01:37:22 INFO mapred.JobTracker: Removed completed task 'attempt_20110722005425547_0013_m_000002_0' from 'tracker_host3.foo.com:localhost/127.0.0.1:58112'
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1247146579982373405 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1247146579982373405 is added to invalidSet of 127.0.0.1:44560
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_1247146579982373405 is added to invalidSet of 127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_6858688704163705742 is added to invalidSet of 127.0.0.1:44560
    [junit] 11/07/22 01:37:22 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20110722005425547_0013
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_6858688704163705742 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 WARN mapred.TaskTracker: Unknown job job_20110722005425547_0013 being deleted.
    [junit] 11/07/22 01:37:22 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20110722005425547_0013
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_6858688704163705742 is added to invalidSet of 127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_133115624376423742 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20110722005425547_0013
    [junit] 11/07/22 01:37:22 INFO mapred.TaskTracker: Received 'KillJobAction' for job: job_20110722005425547_0013
    [junit] 11/07/22 01:37:22 INFO mapred.TaskRunner: attempt_20110722005425547_0013_m_000001_0 done; removing files.
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_133115624376423742 is added to invalidSet of 127.0.0.1:44560
    [junit] 11/07/22 01:37:22 WARN mapred.TaskTracker: Unknown job job_20110722005425547_0013 being deleted.
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_133115624376423742 is added to invalidSet of 127.0.0.1:51016
    [junit] 11/07/22 01:37:22 WARN mapred.TaskTracker: Unknown job job_20110722005425547_0013 being deleted.
    [junit] 11/07/22 01:37:22 INFO mapred.IndexCache: Map ID attempt_20110722005425547_0013_m_000001_0 not found in cache
    [junit] 11/07/22 01:37:22 INFO FSNamesystem.audit: ugi=hudson,hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/hadoop-hudson/mapred/system/job_20110722005425547_0013	dst=null	perm=null
    [junit] 11/07/22 01:37:22 INFO mapred.JobTracker: Killing job job_20110722005425547_0013
    [junit] 11/07/22 01:37:22 INFO mapReduceLayer.MapReduceLauncher: job job_20110722005425547_0013 has failed! Stop running all dependent jobs
    [junit] 11/07/22 01:37:22 INFO mapReduceLayer.MapReduceLauncher: 100% complete
    [junit] 11/07/22 01:37:22 WARN mapReduceLayer.Launcher: There is no log file to write to.
    [junit] 11/07/22 01:37:22 ERROR mapReduceLayer.Launcher: Backend error message
    [junit] org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught error from UDF: org.apache.pig.test.TestStore$FailUDF [FailUDFException]
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:242)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:305)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.getNext(PhysicalOperator.java:322)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.processPlan(POForEach.java:332)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNext(POForEach.java:284)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:267)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:262)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
    [junit] 	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    [junit] 	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
    [junit] 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    [junit] 	at org.apache.hadoop.mapred.Child.main(Child.java:170)
    [junit] Caused by: java.io.IOException: FailUDFException
    [junit] 	at org.apache.pig.test.TestStore$FailUDF.exec(TestStore.java:690)
    [junit] 	at org.apache.pig.test.TestStore$FailUDF.exec(TestStore.java:686)
    [junit] 	at org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.getNext(POUserFunc.java:216)
    [junit] 	... 11 more
    [junit] 
    [junit] 11/07/22 01:37:22 ERROR pigstats.SimplePigStats: ERROR 2078: Caught error from UDF: org.apache.pig.test.TestStore$FailUDF [FailUDFException]
    [junit] 11/07/22 01:37:22 ERROR pigstats.PigStatsUtil: 1 map reduce job(s) failed!
    [junit] 11/07/22 01:37:22 INFO pigstats.SimplePigStats: Script Statistics: 
    [junit] 
    [junit] HadoopVersion	PigVersion	UserId	StartedAt	FinishedAt	Features
    [junit] 0.20.2	2011-07-21_22-41-03	hudson	2011-07-22 01:35:46	2011-07-22 01:37:22	UNKNOWN
    [junit] 
    [junit] Failed!
    [junit] 
    [junit] Failed Jobs:
    [junit] JobId	Alias	Feature	Message	Outputs
    [junit] job_20110722005425547_0013	a,b	MAP_ONLY	Message: Job failed!	/tmp/TestStore-output--1293221001145632294.txt_1,
    [junit] 
    [junit] Input(s):
    [junit] Failed to read data from "/tmp/TestStore-1673645843841222728.txt"
    [junit] 
    [junit] Output(s):
    [junit] Failed to produce result in "/tmp/TestStore-output--1293221001145632294.txt_1"
    [junit] 
    [junit] Counters:
    [junit] Total records written : 0
    [junit] Total bytes written : 0
    [junit] Spillable Memory Manager spill count : 0
    [junit] Total bags proactively spilled: 0
    [junit] Total records proactively spilled: 0
    [junit] 
    [junit] Job DAG:
    [junit] job_20110722005425547_0013
    [junit] 
    [junit] 
    [junit] 11/07/22 01:37:22 INFO mapReduceLayer.MapReduceLauncher: Failed!
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_7167232418050835033 is added to invalidSet of 127.0.0.1:51016
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_7167232418050835033 is added to invalidSet of 127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_7167232418050835033 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-1117289598324678566 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-1117289598324678566 is added to invalidSet of 127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-1117289598324678566 is added to invalidSet of 127.0.0.1:51016
    [junit] 11/07/22 01:37:22 INFO FSNamesystem.audit: ugi=hudson,hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/TestStore-output--1293221001145632294.txt_1	dst=null	perm=null
    [junit] 11/07/22 01:37:22 INFO executionengine.HExecutionEngine: Connecting to hadoop file system at: hdfs://localhost:43661
    [junit] 11/07/22 01:37:22 INFO executionengine.HExecutionEngine: Connecting to map-reduce job tracker at: localhost:59099
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-4519427391306123443 is added to invalidSet of 127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-4519427391306123443 is added to invalidSet of 127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addToInvalidates: blk_-4519427391306123443 is added to invalidSet of 127.0.0.1:51016
    [junit] 11/07/22 01:37:22 INFO FSNamesystem.audit: ugi=hudson,hudson	ip=/127.0.0.1	cmd=delete	src=/tmp/TestStore-1673645843841222728.txt	dst=null	perm=null
    [junit] 11/07/22 01:37:22 INFO FSNamesystem.audit: ugi=hudson,hudson	ip=/127.0.0.1	cmd=create	src=/tmp/TestStore-1673645843841222728.txt	dst=null	perm=hudson:supergroup:rw-r--r--
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.allocateBlock: /tmp/TestStore-1673645843841222728.txt. blk_-9077278668398096392_1113
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: Receiving block blk_-9077278668398096392_1113 src: /127.0.0.1:59413 dest: /127.0.0.1:44560
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: Receiving block blk_-9077278668398096392_1113 src: /127.0.0.1:47409 dest: /127.0.0.1:41558
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: Receiving block blk_-9077278668398096392_1113 src: /127.0.0.1:47212 dest: /127.0.0.1:42511
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:47212, dest: /127.0.0.1:42511, bytes: 31, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-687389278-127.0.1.1-42511-1311296055429, blockid: blk_-9077278668398096392_1113
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 0 for block blk_-9077278668398096392_1113 terminating
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:47409, dest: /127.0.0.1:41558, bytes: 31, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-448864010-127.0.1.1-41558-1311296045100, blockid: blk_-9077278668398096392_1113
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:42511 is added to blk_-9077278668398096392_1113 size 31
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 1 for block blk_-9077278668398096392_1113 terminating
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:41558 is added to blk_-9077278668398096392_1113 size 31
    [junit] 11/07/22 01:37:22 INFO DataNode.clienttrace: src: /127.0.0.1:59413, dest: /127.0.0.1:44560, bytes: 31, op: HDFS_WRITE, cliID: DFSClient_-1650800025, srvID: DS-1632131043-127.0.1.1-44560-1311296065481, blockid: blk_-9077278668398096392_1113
    [junit] 11/07/22 01:37:22 INFO datanode.DataNode: PacketResponder 2 for block blk_-9077278668398096392_1113 terminating
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: BLOCK* NameSystem.addStoredBlock: blockMap updated: 127.0.0.1:44560 is added to blk_-9077278668398096392_1113 size 31
    [junit] 11/07/22 01:37:22 INFO hdfs.StateChange: DIR* NameSystem.completeFile: file /tmp/TestStore-1673645843841222728.txt is closed by DFSClient_-1650800025
    [junit] 11/07/22 01:37:23 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:44560 to delete  blk_1247146579982373405_1107 blk_133115624376423742_1109 blk_6858688704163705742_1108
    [junit] 11/07/22 01:37:23 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:51016 to delete  blk_-4519427391306123443_1106 blk_-1117289598324678566_1112 blk_133115624376423742_1109 blk_7167232418050835033_1111
    [junit] 11/07/22 01:37:25 INFO datanode.DataNode: Deleting block blk_133115624376423742_1109 file build/test/data/dfs/data/data7/current/blk_133115624376423742
    [junit] 11/07/22 01:37:25 INFO datanode.DataNode: Deleting block blk_1247146579982373405_1107 file build/test/data/dfs/data/data7/current/blk_1247146579982373405
    [junit] 11/07/22 01:37:25 INFO datanode.DataNode: Deleting block blk_6858688704163705742_1108 file build/test/data/dfs/data/data8/current/blk_6858688704163705742
    [junit] 11/07/22 01:37:26 INFO datanode.DataNode: Deleting block blk_-4519427391306123443_1106 file build/test/data/dfs/data/data3/current/blk_-4519427391306123443
    [junit] 11/07/22 01:37:26 INFO datanode.DataNode: Deleting block blk_-1117289598324678566_1112 file build/test/data/dfs/data/data4/current/blk_-1117289598324678566
    [junit] 11/07/22 01:37:26 INFO datanode.DataNode: Deleting block blk_133115624376423742_1109 file build/test/data/dfs/data/data4/current/blk_133115624376423742
    [junit] 11/07/22 01:37:26 INFO datanode.DataNode: Deleting block blk_7167232418050835033_1111 file build/test/data/dfs/data/data3/current/blk_7167232418050835033
    [junit] 11/07/22 01:37:26 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:41558 to delete  blk_-4519427391306123443_1106 blk_1247146579982373405_1107 blk_-1117289598324678566_1112 blk_7167232418050835033_1111 blk_6858688704163705742_1108
    [junit] 11/07/22 01:37:26 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:42511 to delete  blk_-4519427391306123443_1106 blk_1247146579982373405_1107 blk_-1117289598324678566_1112 blk_133115624376423742_1109 blk_7167232418050835033_1111 blk_6858688704163705742_1108
    [junit] 11/07/22 01:37:27 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:27 INFO datanode.DataNode: Deleting block blk_-4519427391306123443_1106 file build/test/data/dfs/data/data1/current/blk_-4519427391306123443
    [junit] 11/07/22 01:37:27 INFO datanode.DataNode: Deleting block blk_-1117289598324678566_1112 file build/test/data/dfs/data/data1/current/blk_-1117289598324678566
    [junit] 11/07/22 01:37:27 INFO datanode.DataNode: Deleting block blk_1247146579982373405_1107 file build/test/data/dfs/data/data2/current/blk_1247146579982373405
    [junit] 11/07/22 01:37:27 INFO datanode.DataNode: Deleting block blk_6858688704163705742_1108 file build/test/data/dfs/data/data1/current/blk_6858688704163705742
    [junit] 11/07/22 01:37:27 INFO datanode.DataNode: Deleting block blk_7167232418050835033_1111 file build/test/data/dfs/data/data2/current/blk_7167232418050835033
    [junit] 11/07/22 01:37:27 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:28 INFO datanode.DataNode: Deleting block blk_-4519427391306123443_1106 file build/test/data/dfs/data/data6/current/blk_-4519427391306123443
    [junit] 11/07/22 01:37:28 INFO datanode.DataNode: Deleting block blk_-1117289598324678566_1112 file build/test/data/dfs/data/data5/current/blk_-1117289598324678566
    [junit] 11/07/22 01:37:28 INFO datanode.DataNode: Deleting block blk_133115624376423742_1109 file build/test/data/dfs/data/data5/current/blk_133115624376423742
    [junit] 11/07/22 01:37:28 WARN datanode.DataNode: Unexpected error trying to delete block blk_1247146579982373405_1107. BlockInfo not found in volumeMap.
    [junit] 11/07/22 01:37:28 INFO datanode.DataNode: Deleting block blk_6858688704163705742_1108 file build/test/data/dfs/data/data6/current/blk_6858688704163705742
    [junit] 11/07/22 01:37:28 INFO datanode.DataNode: Deleting block blk_7167232418050835033_1111 file build/test/data/dfs/data/data6/current/blk_7167232418050835033
    [junit] 11/07/22 01:37:28 WARN datanode.DataNode: Error processing datanode Command
    [junit] java.io.IOException: Error in deleting blocks.
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1361)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:868)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:830)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:710)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/22 01:37:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:37 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:37 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:47 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:47 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:47 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:52 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:52 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:52 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:57 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:57 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:37:57 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:02 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:02 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:02 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:02 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:07 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:07 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:12 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN pig.PigServer: bytearray is implicitly cast to chararray under EqualExpression Operator
    [junit] 11/07/22 01:38:12 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:12 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:17 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:17 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:17 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:22 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:22 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:22 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:27 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:27 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:27 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:32 INFO pigstats.ScriptState: Pig features used in the script: FILTER
    [junit] 11/07/22 01:38:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:32 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:37 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:37 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:37 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:42 WARN builtin.JsonMetadata: Could not find schema file for /tmp/TestStore-1673645843841222728.txt
    [junit] 11/07/22 01:38:45 WARN hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /tmp/TestStore-output--167234160945274
    [junit] Running org.apache.pig.test.TestStore
    [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
    [junit] Test org.apache.pig.test.TestStore FAILED (timeout)
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/22 01:38:48 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/22 01:38:48 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/22 01:38:48 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/22 01:38:48 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/22 01:38:48 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/22 01:38:48 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/22 01:38:48 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.119 sec
   [delete] Deleting directory /tmp/pig_junit_tmp873928744

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:664: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:719: Tests failed!

Total time: 164 minutes 20 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure


Build failed in Jenkins: Pig-trunk #1051

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1051/changes>

Changes:

[thejas] fix import in testcase added as part of - PIG-2146: POStore.getSchema() returns null because of which PigOutputCommitter is not storing schema while cleanup

[daijy] PIG-2125: Make Pig work with hadoop .NEXT (PIG-2125-5.patch)

[thejas] PIG-2146: POStore.getSchema() returns null because of which PigOutputCommitter is not storing schema while cleanup

------------------------------------------
[...truncated 39219 lines...]
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/21 10:35:24 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output--8328214239024647269.txt_cleanupOnFailure_succeeded1 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output--8328214239024647269.txt_cleanupOnFailure_succeeded1 by DFSClient_1223368398
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output--8328214239024647269.txt_cleanupOnFailure_succeeded1 by DFSClient_1223368398
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] 11/07/21 10:35:24 WARN hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 because dir.getFileBlocks() is null  and pendingFile is null
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 5 on 38626, call complete(/tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2, DFSClient_1223368398) from 127.0.0.1:52485: error: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 by DFSClient_1223368398
    [junit] java.io.IOException: Could not complete write to file /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 by DFSClient_1223368398
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/21 10:35:24 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 by DFSClient_1223368398
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7223443780415400558.txt_cleanupOnFailure_succeeded2 by DFSClient_1223368398
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 50680
    [junit] Shutting down DataNode 3
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 0 on 50680: exiting
    [junit] 11/07/21 10:35:24 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:53215, storageID=DS-194588715-127.0.1.1-53215-1311244155820, infoPort=48709, ipcPort=50680):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 1 on 50680: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server listener on 50680
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 2 on 50680: exiting
    [junit] 11/07/21 10:35:24 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:53215, storageID=DS-194588715-127.0.1.1-53215-1311244155820, infoPort=48709, ipcPort=50680):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data7/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data8/current'}>
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 50680
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 2
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 51781
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 0 on 51781: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 1 on 51781: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server listener on 51781
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/21 10:35:24 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:47869, storageID=DS-1551150481-127.0.1.1-47869-1311244155523, infoPort=59919, ipcPort=51781):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 2 on 51781: exiting
    [junit] 11/07/21 10:35:24 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:47869, storageID=DS-1551150481-127.0.1.1-47869-1311244155523, infoPort=59919, ipcPort=51781):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current'}>
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 51781
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 54285
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 0 on 54285: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 2 on 54285: exiting
    [junit] 11/07/21 10:35:24 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:39809, storageID=DS-1638254488-127.0.1.1-39809-1311244155216, infoPort=60200, ipcPort=54285):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server listener on 54285
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 1 on 54285: exiting
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/21 10:35:24 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:39809, storageID=DS-1638254488-127.0.1.1-39809-1311244155216, infoPort=60200, ipcPort=54285):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current'}>
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 54285
    [junit] Shutting down DataNode 0
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 46492
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 0 on 46492: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 2 on 46492: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server listener on 46492
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 1 on 46492: exiting
    [junit] 11/07/21 10:35:24 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:39336, storageID=DS-1732004218-127.0.1.1-39336-1311244154900, infoPort=43295, ipcPort=46492):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/21 10:35:24 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:39336, storageID=DS-1732004218-127.0.1.1-39336-1311244154900, infoPort=43295, ipcPort=46492):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current'}>
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 46492
    [junit] 11/07/21 10:35:24 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/21 10:35:24 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/07/21 10:35:24 INFO namenode.FSNamesystem: Number of transactions: 694 Total time for transactions(ms): 10Number of transactions batched in Syncs: 114 Number of syncs: 484 SyncTimes(ms): 5117 528 
    [junit] 11/07/21 10:35:24 INFO namenode.DecommissionManager: Interrupted Monitor
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping server on 38626
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server listener on 38626
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 2 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 1 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 6 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 3 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 9 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 4 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 0 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 8 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 7 on 38626: exiting
    [junit] 11/07/21 10:35:24 INFO ipc.Server: IPC Server handler 5 on 38626: exiting
    [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 366.33 sec
    [junit] 11/07/21 10:35:24 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/21 10:35:25 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/21 10:35:25 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/21 10:35:25 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/21 10:35:25 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/21 10:35:25 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/21 10:35:25 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/21 10:35:25 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.116 sec
   [delete] Deleting directory /tmp/pig_junit_tmp474614293

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:664: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:719: Tests failed!

Total time: 21 minutes 16 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure


Build failed in Jenkins: Pig-trunk #1050

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1050/changes>

Changes:

[daijy] PIG-2175: Switch Pig wiki to use confluence

[daijy] PIG-2027: NPE if Pig doesn't have permission for the log file

------------------------------------------
[...truncated 39312 lines...]
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-6378947281920216709.txt_cleanupOnFailure_succeeded2 by DFSClient_1722705116
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] 11/07/20 22:33:58 WARN hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded because dir.getFileBlocks() is null  and pendingFile is null
    [junit] 11/07/20 22:33:58 INFO ipc.Server: IPC Server handler 0 on 51139, call complete(/tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded, DFSClient_1722705116) from 127.0.0.1:32947: error: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded by DFSClient_1722705116
    [junit] java.io.IOException: Could not complete write to file /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded by DFSClient_1722705116
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/20 22:33:58 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded by DFSClient_1722705116
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-7340733076336181835.txt_cleanupOnFailure_succeeded by DFSClient_1722705116
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 3
    [junit] 11/07/20 22:33:58 INFO ipc.Server: Stopping server on 38187
    [junit] 11/07/20 22:33:58 INFO ipc.Server: IPC Server handler 1 on 38187: exiting
    [junit] 11/07/20 22:33:58 INFO ipc.Server: IPC Server handler 0 on 38187: exiting
    [junit] 11/07/20 22:33:58 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/20 22:33:58 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/20 22:33:58 INFO ipc.Server: Stopping IPC Server listener on 38187
    [junit] 11/07/20 22:33:58 INFO ipc.Server: IPC Server handler 2 on 38187: exiting
    [junit] 11/07/20 22:33:58 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:45458, storageID=DS-1048577633-127.0.1.1-45458-1311200871662, infoPort=40022, ipcPort=38187):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/20 22:33:59 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:45458, storageID=DS-1048577633-127.0.1.1-45458-1311200871662, infoPort=40022, ipcPort=38187):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data7/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data8/current'}>
    [junit] 11/07/20 22:33:59 INFO ipc.Server: Stopping server on 38187
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 2
    [junit] 11/07/20 22:33:59 INFO ipc.Server: Stopping server on 59058
    [junit] 11/07/20 22:33:59 INFO ipc.Server: IPC Server handler 0 on 59058: exiting
    [junit] 11/07/20 22:33:59 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/20 22:33:59 INFO ipc.Server: Stopping IPC Server listener on 59058
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/20 22:33:59 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:57222, storageID=DS-770166947-127.0.1.1-57222-1311200871358, infoPort=50166, ipcPort=59058):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/20 22:33:59 INFO ipc.Server: IPC Server handler 2 on 59058: exiting
    [junit] 11/07/20 22:33:59 INFO ipc.Server: IPC Server handler 1 on 59058: exiting
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: Deleting block blk_-3522264909465784854_1123 file build/test/data/dfs/data/data2/current/blk_-3522264909465784854
    [junit] 11/07/20 22:33:59 INFO datanode.DataNode: Deleting block blk_-819386376063049456_1122 file build/test/data/dfs/data/data1/current/blk_-819386376063049456
    [junit] 11/07/20 22:34:00 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/20 22:34:00 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:57222, storageID=DS-770166947-127.0.1.1-57222-1311200871358, infoPort=50166, ipcPort=59058):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current'}>
    [junit] 11/07/20 22:34:00 INFO ipc.Server: Stopping server on 59058
    [junit] 11/07/20 22:34:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 11/07/20 22:34:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/20 22:34:00 INFO ipc.Server: Stopping server on 48784
    [junit] 11/07/20 22:34:00 INFO ipc.Server: IPC Server handler 0 on 48784: exiting
    [junit] 11/07/20 22:34:00 INFO ipc.Server: IPC Server handler 1 on 48784: exiting
    [junit] 11/07/20 22:34:00 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/20 22:34:00 INFO ipc.Server: Stopping IPC Server listener on 48784
    [junit] 11/07/20 22:34:00 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/20 22:34:00 INFO ipc.Server: IPC Server handler 2 on 48784: exiting
    [junit] 11/07/20 22:34:00 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:49251, storageID=DS-91189809-127.0.1.1-49251-1311200871055, infoPort=40367, ipcPort=48784):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/20 22:34:01 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/20 22:34:01 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:45458 to delete  blk_-3959651314966766985_1121
    [junit] 11/07/20 22:34:01 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:42981 to delete  blk_-6254939454443058954_1127 blk_-2411028607433159088_1126
    [junit] 11/07/20 22:34:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/20 22:34:01 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:49251, storageID=DS-91189809-127.0.1.1-49251-1311200871055, infoPort=40367, ipcPort=48784):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current'}>
    [junit] 11/07/20 22:34:01 INFO ipc.Server: Stopping server on 48784
    [junit] 11/07/20 22:34:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 11/07/20 22:34:01 INFO ipc.Server: Stopping server on 40040
    [junit] 11/07/20 22:34:01 INFO ipc.Server: IPC Server handler 0 on 40040: exiting
    [junit] 11/07/20 22:34:01 INFO ipc.Server: IPC Server handler 2 on 40040: exiting
    [junit] 11/07/20 22:34:01 INFO ipc.Server: Stopping IPC Server listener on 40040
    [junit] 11/07/20 22:34:01 INFO ipc.Server: IPC Server handler 1 on 40040: exiting
    [junit] 11/07/20 22:34:01 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/20 22:34:01 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/20 22:34:01 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:42981, storageID=DS-439444866-127.0.1.1-42981-1311200870728, infoPort=55880, ipcPort=40040):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/20 22:34:02 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/20 22:34:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/20 22:34:02 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:42981, storageID=DS-439444866-127.0.1.1-42981-1311200870728, infoPort=55880, ipcPort=40040):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current'}>
    [junit] 11/07/20 22:34:02 INFO ipc.Server: Stopping server on 40040
    [junit] 11/07/20 22:34:02 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/20 22:34:02 INFO namenode.DecommissionManager: Interrupted Monitor
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/20 22:34:02 INFO namenode.FSNamesystem: Number of transactions: 694 Total time for transactions(ms): 12Number of transactions batched in Syncs: 114 Number of syncs: 484 SyncTimes(ms): 2546 325 
    [junit] 11/07/20 22:34:02 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/07/20 22:34:02 INFO ipc.Server: Stopping server on 51139
    [junit] 11/07/20 22:34:02 INFO ipc.Server: Stopping IPC Server listener on 51139
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 1 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 6 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 2 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 9 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 4 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 3 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 0 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 7 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 5 on 51139: exiting
    [junit] 11/07/20 22:34:02 INFO ipc.Server: IPC Server handler 8 on 51139: exiting
    [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 368.971 sec
    [junit] 11/07/20 22:34:02 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/20 22:34:03 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/20 22:34:03 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/20 22:34:03 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/20 22:34:03 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/20 22:34:03 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/20 22:34:03 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/20 22:34:03 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.116 sec
   [delete] Deleting directory /tmp/pig_junit_tmp498103579

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:664: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:719: Tests failed!

Total time: 21 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure


Build failed in Jenkins: Pig-trunk #1049

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1049/changes>

Changes:

[thejas] PIG-2171: TestScriptLanguage is broken on trunk (daijy and thejas)

------------------------------------------
[...truncated 37878 lines...]
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] 11/07/19 22:30:27 WARN hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 because dir.getFileBlocks() is null  and pendingFile is null
    [junit] 11/07/19 22:30:27 INFO ipc.Server: IPC Server handler 1 on 36580, call complete(/tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1, DFSClient_148109060) from 127.0.0.1:60534: error: java.io.IOException: Could not complete write to file /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 by DFSClient_148109060
    [junit] java.io.IOException: Could not complete write to file /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 by DFSClient_148109060
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/19 22:30:27 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 by DFSClient_148109060
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-8545234101387032655.txt_cleanupOnFailure_succeeded1 by DFSClient_148109060
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 3
    [junit] 11/07/19 22:30:27 INFO ipc.Server: Stopping server on 59618
    [junit] 11/07/19 22:30:27 INFO ipc.Server: Stopping IPC Server listener on 59618
    [junit] 11/07/19 22:30:27 INFO ipc.Server: IPC Server handler 0 on 59618: exiting
    [junit] 11/07/19 22:30:27 INFO ipc.Server: IPC Server handler 2 on 59618: exiting
    [junit] 11/07/19 22:30:27 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 22:30:27 INFO ipc.Server: IPC Server handler 1 on 59618: exiting
    [junit] 11/07/19 22:30:27 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 22:30:27 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:48217, storageID=DS-975231673-127.0.1.1-48217-1311114259724, infoPort=49713, ipcPort=59618):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 22:30:27 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 22:30:27 INFO datanode.DataNode: Deleting block blk_4800584608513647894_1124 file build/test/data/dfs/data/data1/current/blk_4800584608513647894
    [junit] 11/07/19 22:30:27 INFO datanode.DataNode: Deleting block blk_4896791404803913953_1123 file build/test/data/dfs/data/data2/current/blk_4896791404803913953
    [junit] 11/07/19 22:30:27 INFO datanode.DataNode: Deleting block blk_7816747196754742145_1122 file build/test/data/dfs/data/data1/current/blk_7816747196754742145
    [junit] 11/07/19 22:30:28 INFO datanode.DataNode: Deleting block blk_4800584608513647894_1124 file build/test/data/dfs/data/data4/current/blk_4800584608513647894
    [junit] 11/07/19 22:30:28 WARN datanode.DataNode: Unexpected error trying to delete block blk_7816747196754742145_1122. BlockInfo not found in volumeMap.
    [junit] 11/07/19 22:30:28 WARN datanode.DataNode: Error processing datanode Command
    [junit] java.io.IOException: Error in deleting blocks.
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1361)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:868)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:830)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:710)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1186)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/19 22:30:28 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 22:30:28 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48217, storageID=DS-975231673-127.0.1.1-48217-1311114259724, infoPort=49713, ipcPort=59618):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data7/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data8/current'}
    [junit] 11/07/19 22:30:28 INFO ipc.Server: Stopping server on 59618
    [junit] 11/07/19 22:30:28 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 2
    [junit] 11/07/19 22:30:28 INFO ipc.Server: Stopping server on 59120
    [junit] 11/07/19 22:30:28 INFO ipc.Server: IPC Server handler 1 on 59120: exiting
    [junit] 11/07/19 22:30:28 INFO ipc.Server: IPC Server handler 0 on 59120: exiting
    [junit] 11/07/19 22:30:28 INFO ipc.Server: IPC Server handler 2 on 59120: exiting
    [junit] 11/07/19 22:30:28 INFO ipc.Server: Stopping IPC Server listener on 59120
    [junit] 11/07/19 22:30:28 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 22:30:28 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 22:30:28 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:60821, storageID=DS-288536262-127.0.1.1-60821-1311114259415, infoPort=58733, ipcPort=59120):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 22:30:29 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 22:30:29 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:54688 to delete  blk_-8809969301119147156_1126 blk_-2650349933054350032_1127
    [junit] 11/07/19 22:30:29 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:58170 to delete  blk_-8809969301119147156_1126 blk_8316144192034219252_1121 blk_-2650349933054350032_1127
    [junit] 11/07/19 22:30:29 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 22:30:29 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:60821, storageID=DS-288536262-127.0.1.1-60821-1311114259415, infoPort=58733, ipcPort=59120):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 11/07/19 22:30:29 INFO ipc.Server: Stopping server on 59120
    [junit] 11/07/19 22:30:29 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 11/07/19 22:30:29 INFO ipc.Server: Stopping server on 43783
    [junit] 11/07/19 22:30:29 INFO ipc.Server: IPC Server handler 0 on 43783: exiting
    [junit] 11/07/19 22:30:29 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 22:30:29 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 22:30:29 INFO ipc.Server: IPC Server handler 1 on 43783: exiting
    [junit] 11/07/19 22:30:29 INFO ipc.Server: Stopping IPC Server listener on 43783
    [junit] 11/07/19 22:30:29 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:54688, storageID=DS-1036797753-127.0.1.1-54688-1311114259116, infoPort=56472, ipcPort=43783):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 22:30:29 INFO ipc.Server: IPC Server handler 2 on 43783: exiting
    [junit] 11/07/19 22:30:30 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:54688, storageID=DS-1036797753-127.0.1.1-54688-1311114259116, infoPort=56472, ipcPort=43783):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 11/07/19 22:30:30 INFO ipc.Server: Stopping server on 43783
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 0
    [junit] 11/07/19 22:30:30 INFO ipc.Server: Stopping server on 57189
    [junit] 11/07/19 22:30:30 INFO ipc.Server: IPC Server handler 0 on 57189: exiting
    [junit] 11/07/19 22:30:30 INFO ipc.Server: IPC Server handler 1 on 57189: exiting
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 22:30:30 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:58170, storageID=DS-26827326-127.0.1.1-58170-1311114258809, infoPort=39543, ipcPort=57189):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 22:30:30 INFO ipc.Server: Stopping IPC Server listener on 57189
    [junit] 11/07/19 22:30:30 INFO ipc.Server: IPC Server handler 2 on 57189: exiting
    [junit] 11/07/19 22:30:30 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 22:30:30 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:58170, storageID=DS-26827326-127.0.1.1-58170-1311114258809, infoPort=39543, ipcPort=57189):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 11/07/19 22:30:30 INFO ipc.Server: Stopping server on 57189
    [junit] 11/07/19 22:30:30 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 22:30:31 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 22:30:31 INFO namenode.DecommissionManager: Interrupted Monitor
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/19 22:30:31 INFO namenode.FSNamesystem: Number of transactions: 694 Total time for transactions(ms): 21 Number of transactions batched in Syncs: 115 Number of syncs: 483 SyncTimes(ms): 3172 325 
    [junit] 11/07/19 22:30:31 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/07/19 22:30:31 INFO ipc.Server: Stopping server on 36580
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 0 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: Stopping IPC Server listener on 36580
    [junit] 11/07/19 22:30:31 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 3 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 7 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 2 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 8 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 6 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 5 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 4 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 9 on 36580: exiting
    [junit] 11/07/19 22:30:31 INFO ipc.Server: IPC Server handler 1 on 36580: exiting
    [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 369.801 sec
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/19 22:30:32 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/19 22:30:32 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/19 22:30:32 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/19 22:30:32 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/19 22:30:32 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/19 22:30:32 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/19 22:30:32 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.116 sec
   [delete] Deleting directory /tmp/pig_junit_tmp641123939

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:664: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:719: Tests failed!

Total time: 18 minutes 27 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure


Build failed in Jenkins: Pig-trunk #1048

Posted by Apache Jenkins Server <je...@builds.apache.org>.
See <https://builds.apache.org/job/Pig-trunk/1048/changes>

Changes:

[daijy] Make Pig work with hadoop .NEXT

[daijy] PIG-2159: New logical plan uses incorrect class for SUM causing ClassCastException

[daijy] PIG-2172: Fix test failure for ant 1.8.x

------------------------------------------
[...truncated 39329 lines...]
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-4310659416812167551.txt_cleanupOnFailure_succeeded by DFSClient_1282029458
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] 11/07/19 10:33:39 WARN hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 because dir.getFileBlocks() is null  and pendingFile is null
    [junit] 11/07/19 10:33:39 INFO ipc.Server: IPC Server handler 2 on 41941, call complete(/tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1, DFSClient_1282029458) from 127.0.0.1:36483: error: java.io.IOException: Could not complete write to file /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 by DFSClient_1282029458
    [junit] java.io.IOException: Could not complete write to file /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 by DFSClient_1282029458
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 11/07/19 10:33:39 ERROR hdfs.DFSClient: Exception closing file /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 : org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 by DFSClient_1282029458
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] org.apache.hadoop.ipc.RemoteException: java.io.IOException: Could not complete write to file /tmp/TestStore-output-6181164642597706789.txt_cleanupOnFailure_succeeded1 by DFSClient_1282029458
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    [junit] 	at java.security.AccessController.doPrivileged(Native Method)
    [junit] 	at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit] 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
    [junit] 
    [junit] 	at org.apache.hadoop.ipc.Client.call(Client.java:740)
    [junit] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    [junit] 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    [junit] 	at $Proxy0.complete(Unknown Source)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3264)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3188)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:1043)
    [junit] 	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:237)
    [junit] 	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:269)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:83)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77)
    [junit] 	at org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68)
    [junit] 	at org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:127)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit] 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit] 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit] 	at java.lang.reflect.Method.invoke(Method.java:597)
    [junit] 	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    [junit] 	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    [junit] 	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    [junit] 	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    [junit] 	at org.junit.runners.ParentRunner.run(ParentRunner.java:220)
    [junit] 	at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911)
    [junit] 	at org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768)
    [junit] Shutting down the Mini HDFS Cluster
    [junit] Shutting down DataNode 3
    [junit] 11/07/19 10:33:39 INFO ipc.Server: Stopping server on 38451
    [junit] 11/07/19 10:33:39 INFO ipc.Server: IPC Server handler 0 on 38451: exiting
    [junit] 11/07/19 10:33:39 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 10:33:39 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 10:33:39 INFO ipc.Server: Stopping IPC Server listener on 38451
    [junit] 11/07/19 10:33:39 INFO ipc.Server: IPC Server handler 2 on 38451: exiting
    [junit] 11/07/19 10:33:39 INFO ipc.Server: IPC Server handler 1 on 38451: exiting
    [junit] 11/07/19 10:33:39 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:48248, storageID=DS-20126214-127.0.1.1-48248-1311071251998, infoPort=40975, ipcPort=38451):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 10:33:39 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:48248, storageID=DS-20126214-127.0.1.1-48248-1311071251998, infoPort=40975, ipcPort=38451):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data7/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data8/current'}
    [junit] 11/07/19 10:33:40 INFO ipc.Server: Stopping server on 38451
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 2
    [junit] 11/07/19 10:33:40 INFO ipc.Server: Stopping server on 53239
    [junit] 11/07/19 10:33:40 INFO ipc.Server: IPC Server handler 0 on 53239: exiting
    [junit] 11/07/19 10:33:40 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 10:33:40 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:42326, storageID=DS-1415343713-127.0.1.1-42326-1311071251704, infoPort=54465, ipcPort=53239):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 10:33:40 INFO ipc.Server: IPC Server handler 1 on 53239: exiting
    [junit] 11/07/19 10:33:40 INFO ipc.Server: IPC Server handler 2 on 53239: exiting
    [junit] 11/07/19 10:33:40 INFO ipc.Server: Stopping IPC Server listener on 53239
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: BlockReport of 6 blocks got processed in 2 msecs
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Deleting block blk_-2046756786705305301_1124 file build/test/data/dfs/data/data3/current/blk_-2046756786705305301
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Deleting block blk_464773406130336812_1123 file build/test/data/dfs/data/data4/current/blk_464773406130336812
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Deleting block blk_5824626566657843290_1122 file build/test/data/dfs/data/data3/current/blk_5824626566657843290
    [junit] 11/07/19 10:33:40 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:42326, storageID=DS-1415343713-127.0.1.1-42326-1311071251704, infoPort=54465, ipcPort=53239):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current'}
    [junit] 11/07/19 10:33:40 INFO ipc.Server: Stopping server on 53239
    [junit] 11/07/19 10:33:40 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] Shutting down DataNode 1
    [junit] 11/07/19 10:33:41 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:41 INFO ipc.Server: Stopping server on 54632
    [junit] 11/07/19 10:33:41 INFO ipc.Server: IPC Server handler 0 on 54632: exiting
    [junit] 11/07/19 10:33:41 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 10:33:41 INFO ipc.Server: IPC Server handler 1 on 54632: exiting
    [junit] 11/07/19 10:33:41 INFO ipc.Server: IPC Server handler 2 on 54632: exiting
    [junit] 11/07/19 10:33:41 INFO ipc.Server: Stopping IPC Server listener on 54632
    [junit] 11/07/19 10:33:41 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 10:33:41 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:57264, storageID=DS-1741254656-127.0.1.1-57264-1311071251383, infoPort=50155, ipcPort=54632):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 10:33:41 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 10:33:41 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:42326 to delete  blk_-4905632516841518502_1126 blk_-3045267291178481173_1127
    [junit] 11/07/19 10:33:41 INFO hdfs.StateChange: BLOCK* ask 127.0.0.1:57264 to delete  blk_863817712174337058_1121 blk_-4905632516841518502_1126 blk_-3045267291178481173_1127
    [junit] Shutting down DataNode 0
    [junit] 11/07/19 10:33:42 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:42 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:57264, storageID=DS-1741254656-127.0.1.1-57264-1311071251383, infoPort=50155, ipcPort=54632):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current'}
    [junit] 11/07/19 10:33:42 INFO ipc.Server: Stopping server on 54632
    [junit] 11/07/19 10:33:42 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:42 INFO ipc.Server: Stopping server on 38436
    [junit] 11/07/19 10:33:42 INFO ipc.Server: IPC Server handler 0 on 38436: exiting
    [junit] 11/07/19 10:33:42 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 1
    [junit] 11/07/19 10:33:42 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:35943, storageID=DS-2064813100-127.0.1.1-35943-1311071251084, infoPort=47839, ipcPort=38436):DataXceiveServer: java.nio.channels.AsynchronousCloseException
    [junit] 	at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    [junit] 	at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    [junit] 	at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    [junit] 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:130)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 
    [junit] 11/07/19 10:33:42 INFO ipc.Server: Stopping IPC Server Responder
    [junit] 11/07/19 10:33:42 INFO ipc.Server: IPC Server handler 2 on 38436: exiting
    [junit] 11/07/19 10:33:42 INFO ipc.Server: Stopping IPC Server listener on 38436
    [junit] 11/07/19 10:33:42 INFO ipc.Server: IPC Server handler 1 on 38436: exiting
    [junit] 11/07/19 10:33:42 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
    [junit] 11/07/19 10:33:43 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:43 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:35943, storageID=DS-2064813100-127.0.1.1-35943-1311071251084, infoPort=47839, ipcPort=38436):Finishing DataNode in: FSDataset{dirpath='<https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current>,/grid/0/hudson/hudson-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current'}
    [junit] 11/07/19 10:33:43 INFO ipc.Server: Stopping server on 38436
    [junit] 11/07/19 10:33:43 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
    [junit] 11/07/19 10:33:43 WARN namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
    [junit] 11/07/19 10:33:43 INFO namenode.FSNamesystem: Number of transactions: 694 Total time for transactions(ms): 12 Number of transactions batched in Syncs: 114 Number of syncs: 484 SyncTimes(ms): 3257 287 
    [junit] 11/07/19 10:33:43 INFO namenode.DecommissionManager: Interrupted Monitor
    [junit] java.lang.InterruptedException: sleep interrupted
    [junit] 	at java.lang.Thread.sleep(Native Method)
    [junit] 	at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    [junit] 	at java.lang.Thread.run(Thread.java:662)
    [junit] 11/07/19 10:33:43 INFO ipc.Server: Stopping server on 41941
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 0 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 2 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 8 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 4 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: Stopping IPC Server listener on 41941
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 7 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 5 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 9 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: Stopping IPC Server Responder
    [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 368.924 sec
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 3 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 6 on 41941: exiting
    [junit] 11/07/19 10:33:43 INFO ipc.Server: IPC Server handler 1 on 41941: exiting
    [junit] Running org.apache.pig.test.TestStringUDFs
    [junit] 11/07/19 10:33:43 WARN builtin.SUBSTRING: java.lang.NullPointerException
    [junit] 11/07/19 10:33:43 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/19 10:33:43 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -1
    [junit] 11/07/19 10:33:43 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -8
    [junit] 11/07/19 10:33:43 WARN builtin.SUBSTRING: java.lang.StringIndexOutOfBoundsException: String index out of range: -2
    [junit] 11/07/19 10:33:43 WARN builtin.INDEXOF: Failed to process input; error - null
    [junit] 11/07/19 10:33:43 WARN builtin.LAST_INDEX_OF: Failed to process input; error - null
    [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 0.107 sec
   [delete] Deleting directory /tmp/pig_junit_tmp1777959676

BUILD FAILED
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:664: The following error occurred while executing this line:
<https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:719: Tests failed!

Total time: 21 minutes 13 seconds
[FINDBUGS] Skipping publisher since build result is FAILURE
Recording test results
Publishing Javadoc
Archiving artifacts
Recording fingerprints
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure