Posted to dev@slider.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2014/09/03 19:03:53 UTC

[jira] [Commented] (SLIDER-377) slider MiniHDFSCluster tests failing on windows+branch2

    [ https://issues.apache.org/jira/browse/SLIDER-377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14120091#comment-14120091 ] 

Steve Loughran commented on SLIDER-377:
---------------------------------------

{code}
-------------------------------------------------------
 T E S T S
-------------------------------------------------------

Running org.apache.slider.core.persist.TestConfPersisterLocksHDFS
2014-09-03 17:42:20,723 [JUnit] DEBUG test.YarnMiniClusterTestBase (NativeMethodAccessorImpl.java:invoke0(?)) - java.library.path = c:\java\jdk7\jre\bin;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Windows\Microsoft.NET\Framework\v4.0.30319;C:\Windows\Microsoft.NET\Framework64\v3.5;C:\Windows\Microsoft.NET\Framework\v3.5;;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE;C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\Tools;;C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\Bin\amd64;C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\Bin\VCPackages;;C:\SDK\Bin\NETFX 4.0 Tools\x64;C:\SDK\Bin\x64;C:\SDK\Bin;;C:\bin\cygwin64\bin;C:\bin\Python27\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;c:\java\jdk7\bin;C:\Program Files\Microsoft Windows Performance Toolkit\;C:\apps\maven\bin;C:\apps\Git\cmd;c:\bin;C:\Program Files (x86)\MSBuild\12.0\Bin;C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin;C:\bin\cmake-2.8\bin;C:\bin\hadoop\bin;;.
Formatting using clusterid: testClusterID
2014-09-03 17:42:22,083 [JUnit] INFO  Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1023)) - hadoop.configured.node.mapping is deprecated. Instead, use net.topology.configured.node.mapping
2014-09-03 17:42:22,083 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2014-09-03 17:42:22,083 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:22,083 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 910.5 MB = 18.2 MB
2014-09-03 17:42:22,083 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^21 = 2097152 entries
2014-09-03 17:42:22,317 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map INodeMap
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 910.5 MB = 9.1 MB
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 entries
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map cachedBlocks
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 910.5 MB = 2.3 MB
2014-09-03 17:42:22,333 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^18 = 262144 entries
2014-09-03 17:42:22,348 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map NameNodeRetryCache
2014-09-03 17:42:22,348 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:22,348 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 910.5 MB = 279.7 KB
2014-09-03 17:42:22,348 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^15 = 32768 entries
2014-09-03 17:42:23,051 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2014-09-03 17:42:23,051 [JUnit] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2014-09-03 17:42:23,067 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(698)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-09-03 17:42:23,067 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addFilter(676)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2014-09-03 17:42:23,067 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addFilter(683)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-09-03 17:42:23,098 [JUnit] INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(86)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2014-09-03 17:42:23,098 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(602)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-09-03 17:42:23,114 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:openListeners(886)) - Jetty bound to port 49861

2014-09-03 17:42:23,114 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2014-09-03 17:42:23,161 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Extract jar:file:/C:/Users/Administrator/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.6.0-SNAPSHOT/hadoop-hdfs-2.6.0-SNAPSHOT-tests.jar!/webapps/hdfs to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_127_0_0_1_49861_hdfs____.qsp4nl\webapp
2014-09-03 17:42:23,447 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:49861
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 910.5 MB = 18.2 MB
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^21 = 2097152 entries
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map INodeMap
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 1.0% max memory 910.5 MB = 9.1 MB
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^20 = 1048576 entries
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map cachedBlocks
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.25% max memory 910.5 MB = 2.3 MB
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^18 = 262144 entries
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map NameNodeRetryCache
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 0.029999999329447746% max memory 910.5 MB = 279.7 KB
2014-09-03 17:42:23,458 [JUnit] INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^15 = 32768 entries
2014-09-03 17:42:23,973 [JUnit] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(53)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-09-03 17:42:24,005 [Socket Reader #1 for port 49864] INFO  ipc.Server (Server.java:run(605)) - Starting Socket Reader #1 for port 49864
2014-09-03 17:42:24,385 [IPC Server Responder] INFO  ipc.Server (Server.java:run(827)) - IPC Server Responder: starting
2014-09-03 17:42:24,385 [IPC Server listener on 49864] INFO  ipc.Server (Server.java:run(674)) - IPC Server listener on 49864: starting
2014-09-03 17:42:24,520 [JUnit] INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.datanode is not defined
2014-09-03 17:42:24,520 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(698)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2014-09-03 17:42:24,520 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addFilter(676)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2014-09-03 17:42:24,520 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addFilter(683)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2014-09-03 17:42:24,536 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(602)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2014-09-03 17:42:24,536 [JUnit] INFO  http.HttpServer2 (HttpServer2.java:openListeners(886)) - Jetty bound to port 49872

2014-09-03 17:42:24,536 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26
2014-09-03 17:42:24,536 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Extract jar:file:/C:/Users/Administrator/.m2/repository/org/apache/hadoop/hadoop-hdfs/2.6.0-SNAPSHOT/hadoop-hdfs-2.6.0-SNAPSHOT-tests.jar!/webapps/datanode to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_127_0_0_1_49872_datanode____y91m30\webapp
2014-09-03 17:42:24,739 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:49872
2014-09-03 17:42:24,786 [JUnit] INFO  ipc.CallQueueManager (CallQueueManager.java:<init>(53)) - Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-09-03 17:42:24,801 [Socket Reader #1 for port 49875] INFO  ipc.Server (Server.java:run(605)) - Starting Socket Reader #1 for port 49875
2014-09-03 17:42:24,864 [IPC Server listener on 49875] INFO  ipc.Server (Server.java:run(674)) - IPC Server listener on 49875: starting
2014-09-03 17:42:24,864 [IPC Server Responder] INFO  ipc.Server (Server.java:run(827)) - IPC Server Responder: starting
2014-09-03 17:42:25,270 [DataNode: [[[DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data1/, [DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]]  heartbeating to /127.0.0.1:49864] WARN  datanode.DataNode (BPServiceActor.java:checkNNVersion(198)) - The reported NameNode version is too low to communicate with this DataNode. NameNode version: '2.6.0-SNAPSHOT' Minimum NameNode version: '3.0.0-SNAPSHOT'
2014-09-03 17:42:25,270 [DataNode: [[[DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data1/, [DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]]  heartbeating to /127.0.0.1:49864] FATAL datanode.DataNode (BPServiceActor.java:run(836)) - Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to /127.0.0.1:49864. Exiting.
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: The reported NameNode version is too low to communicate with this DataNode. NameNode version: '2.6.0-SNAPSHOT' Minimum NameNode version: '3.0.0-SNAPSHOT'
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.checkNNVersion(BPServiceActor.java:196)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.retrieveNamespaceInfo(BPServiceActor.java:183)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:215)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:824)
        at java.lang.Thread.run(Thread.java:745)
2014-09-03 17:42:25,286 [DataNode: [[[DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data1/, [DISK]file:/C:/Work/slider/slider-core/target/hdfs/TestConfPersister/data/data2/]]  heartbeating to /127.0.0.1:49864] WARN  datanode.DataNode (BPServiceActor.java:run(857)) - Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to /127.0.0.1:49864
2014-09-03 17:42:25,551 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:shouldWait(2218)) - BPOfferService in datanode DataNode{data=null, localName='127.0.0.1:49871', datanodeUuid='null', xmitsInProgress=0} failed to connect to namenode at 127.0.0.1/127.0.0.1:49864
2014-09-03 17:42:25,661 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:26,677 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:27,677 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:28,692 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:29,708 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:30,708 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:31,708 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:32,708 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:33,723 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:34,739 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:35,755 [JUnit] WARN  hdfs.MiniDFSCluster (MiniDFSCluster.java:waitClusterUp(1192)) - Waiting for the Mini HDFS Cluster to start...
2014-09-03 17:42:36,755 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:0
2014-09-03 17:42:36,864 [JUnit] INFO  ipc.Server (Server.java:stop(2437)) - Stopping server on 49875
2014-09-03 17:42:36,864 [IPC Server listener on 49875] INFO  ipc.Server (Server.java:run(706)) - Stopping IPC Server listener on 49875
2014-09-03 17:42:36,864 [IPC Server Responder] INFO  ipc.Server (Server.java:run(832)) - Stopping IPC Server Responder
2014-09-03 17:42:36,864 [JUnit] INFO  ipc.Server (Server.java:stop(2437)) - Stopping server on 49864
2014-09-03 17:42:36,880 [IPC Server listener on 49864] INFO  ipc.Server (Server.java:run(706)) - Stopping IPC Server listener on 49864
2014-09-03 17:42:36,880 [IPC Server Responder] INFO  ipc.Server (Server.java:run(832)) - Stopping IPC Server Responder
2014-09-03 17:42:36,880 [org.apache.hadoop.hdfs.server.blockmanagement.DecommissionManager$Monitor@3b6653c6] WARN  blockmanagement.DecommissionManager (DecommissionManager.java:run(78)) - Monitor interrupted: java.lang.InterruptedException: sleep interrupted
2014-09-03 17:42:36,911 [JUnit] INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@127.0.0.1:0
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 18.187 sec <<< FAILURE! - in org.apache.slider.core.persist.TestConfPersisterLocksHDFS
org.apache.slider.core.persist.TestConfPersisterLocksHDFS  Time elapsed: 18.187 sec  <<< ERROR!
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
        at org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1197)
        at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:832)
        at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:467)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:426)
        at org.apache.hadoop.hdfs.MiniDFSCluster$Builder$build.call(Unknown Source)
        at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
        at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
        at org.apache.slider.test.YarnMiniClusterTestBase.buildMiniHDFSCluster(YarnMiniClusterTestBase.groovy:285)
        at org.apache.slider.core.persist.TestConfPersisterLocksHDFS.createCluster(TestConfPersisterLocksHDFS.groovy:47)



Results :

Tests in error:
  TestConfPersisterLocksHDFS.createCluster:47->YarnMiniClusterTestBase.buildMiniHDFSCluster:285 » IO

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0
{code}
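The interesting line in that log is the {{IncorrectVersionException}}: a NameNode reporting '2.6.0-SNAPSHOT' is rejected by a DataNode whose minimum NameNode version is '3.0.0-SNAPSHOT', which suggests mixed Hadoop artifacts on the test classpath rather than a Windows-specific fault. As a rough illustration only (a hypothetical sketch, not the actual BPServiceActor.checkNNVersion/VersionUtil code, and it ignores -SNAPSHOT suffixes for simplicity), the handshake boils down to a numeric version comparison:

```java
// Hypothetical sketch of a DataNode-style minimum-version handshake check.
// Not the real Hadoop implementation; for illustrating the failure mode only.
public class NNVersionCheck {

    // Compare dotted version strings numerically, stripping any "-SNAPSHOT"
    // style suffix (a simplification; real version ordering is subtler).
    static int compare(String a, String b) {
        String[] pa = a.replaceAll("-.*$", "").split("\\.");
        String[] pb = b.replaceAll("-.*$", "").split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int x = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int y = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    // Reject a NameNode whose reported version sorts below the minimum,
    // mirroring the message seen in the log above.
    static void checkNNVersion(String reported, String minimum) {
        if (compare(reported, minimum) < 0) {
            throw new IllegalStateException(
                "The reported NameNode version is too low to communicate with this DataNode."
                + " NameNode version: '" + reported + "'"
                + " Minimum NameNode version: '" + minimum + "'");
        }
    }

    public static void main(String[] args) {
        // Reproduces the mismatch from the log: a 2.6.0 NameNode against a
        // DataNode that insists on 3.0.0, i.e. jars from two Hadoop builds.
        try {
            checkNNVersion("2.6.0-SNAPSHOT", "3.0.0-SNAPSHOT");
            System.out.println("version check passed");
        } catch (IllegalStateException e) {
            System.out.println("version check failed: " + e.getMessage());
        }
    }
}
```

If this reading is right, the fix is a classpath one: make sure the datanode and namenode classes in the test JVM come from the same hadoop-hdfs build.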

> slider MiniHDFSCluster tests failing on windows+branch2
> -------------------------------------------------------
>
>                 Key: SLIDER-377
>                 URL: https://issues.apache.org/jira/browse/SLIDER-377
>             Project: Slider
>          Issue Type: Sub-task
>          Components: test, windows
>    Affects Versions: Slider 0.60
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>          Time Spent: 29h
>  Remaining Estimate: 0h
>
> Tests that use the MiniHDFSCluster are failing on Windows with link errors: datanodes are failing on JNI linkage errors while calculating CRC32 checksums



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)