Posted to common-dev@hadoop.apache.org by "Jim Kellerman (JIRA)" <ji...@apache.org> on 2007/07/23 17:28:31 UTC

[jira] Commented: (HADOOP-1640) TestDecommission fails on Windows

    [ https://issues.apache.org/jira/browse/HADOOP-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12514653 ] 

Jim Kellerman commented on HADOOP-1640:
---------------------------------------

So long as the timeout applies just to this test, I'd agree.

There are a couple of HBase tests that take about two minutes or a bit more, so applying a universal timeout would be unacceptable.

However, if we could specify test timeouts on a per-test basis, that would be a ++1.
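
For example, JUnit 4's @Test annotation takes a per-test timeout in milliseconds, so each test can declare its own bound. A minimal sketch, assuming the test were ported from JUnit 3 to JUnit 4 (the class name matches this issue; the timeout value is illustrative):

    import org.junit.Test;

    public class TestDecommission {
      // Per-test timeout, in milliseconds: only this test is bounded, so a
      // slow two-minute HBase test elsewhere can declare its own, more
      // generous limit instead of inheriting a universal one.
      @Test(timeout = 15 * 60 * 1000)  // fail this test after 15 minutes
      public void testDecommission() throws Exception {
        // ... existing test body ...
      }
    }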

> TestDecommission fails on Windows
> ---------------------------------
>
>                 Key: HADOOP-1640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1640
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Nigel Daley
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: testDecommission1640.patch
>
>
> In the snippet of test log below, the exception recurs every ~15 milliseconds for 15 minutes, until the test times out:
>     [junit] Created file decommission.dat with 2 replicas.
>     [junit] Block[0] : xxx xxx 
>     [junit] Block[1] : xxx xxx 
>     [junit] Decommissioning node: 127.0.0.1:50013
>     [junit] 2007-07-19 19:12:45,059 INFO  fs.FSNamesystem (FSNamesystem.java:startDecommission(2572)) - Start Decommissioning node 127.0.0.1:50013
>     [junit] Name: 127.0.0.1:50013
>     [junit] State          : Decommission in progress
>     [junit] Total raw bytes: 80030941184 (74.53 GB)
>     [junit] Used raw bytes: 33940945746 (31.60 GB)
>     [junit] % used: 42.40%
>     [junit] Last contact: Thu Jul 19 19:12:44 PDT 2007
>     [junit] Waiting for node 127.0.0.1:50013 to change state to DECOMMISSIONED
>     [junit] 2007-07-19 19:12:45,199 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:3147
>     [junit] 2007-07-19 19:12:45,199 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1d98a
>     [junit] 2007-07-19 19:12:45,293 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
>     [junit] 2007-07-19 19:12:45,402 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
>     [junit] 2007-07-19 19:12:45,481 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
>     [junit] 2007-07-19 19:12:45,481 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@f1916f
>     [junit] 2007-07-19 19:12:45,496 INFO  dfs.DataNode (DataNode.java:run(692)) - Exiting DataXceiveServer due to java.net.SocketException: socket closed
>     [junit] 2007-07-19 19:12:45,496 WARN  dfs.DataNode (DataNode.java:offerService(568)) - java.io.IOException: java.lang.InterruptedException
>     [junit] 	at org.apache.hadoop.fs.DF.doDF(DF.java:71)
>     [junit] 	at org.apache.hadoop.fs.DF.getCapacity(DF.java:89)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset$FSVolume.getCapacity(FSDataset.java:292)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getCapacity(FSDataset.java:379)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset.getCapacity(FSDataset.java:466)
>     [junit] 	at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:493)
>     [junit] 	at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1306)
>     [junit] 	at java.lang.Thread.run(Thread.java:595)
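
For context on the tight loop in the log above: a wait for a state change like this is usually written against a deadline, with a sleep between polls, so a node stuck in "Decommission in progress" fails the test quickly instead of retrying every ~15 milliseconds until the harness timeout fires. A minimal sketch (this helper and its names are hypothetical, not the actual TestDecommission code):

    import java.util.concurrent.Callable;

    public class WaitUtil {
      // Hypothetical helper: poll a condition with a hard deadline and a
      // sleep between checks. If the condition never becomes true, the test
      // fails with a clear message rather than spinning for 15 minutes.
      static void waitFor(Callable<Boolean> condition, long timeoutMillis)
          throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.call()) {
          if (System.currentTimeMillis() > deadline) {
            throw new AssertionError("condition not met within "
                + timeoutMillis + " ms");
          }
          Thread.sleep(500);  // back off between checks instead of busy-waiting
        }
      }
    }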

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.