Posted to common-dev@hadoop.apache.org by "Owen O'Malley (JIRA)" <ji...@apache.org> on 2007/05/15 23:05:17 UTC

[jira] Updated: (HADOOP-1372) DFS Clients should start using the org.apache.hadoop.fs.LocalDirAllocator

     [ https://issues.apache.org/jira/browse/HADOOP-1372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley updated HADOOP-1372:
----------------------------------

    Description: 
I encountered this exception during one of the randomwriter runs. I think this situation can be improved by using org.apache.hadoop.fs.LocalDirAllocator, which was written to handle these kinds of problems (a rough usage sketch follows the stack trace below). I set the fix version to 0.14 but wonder whether it makes sense to have it in 0.13 itself (since the amount of code change would be small).

java.io.FileNotFoundException: /local/dfs/data/tmp/client-1299146109450372217 (Read-only file system)
	at java.io.FileOutputStream.open(Native Method)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1356)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1273)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1255)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:152)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)
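
For reference, a rough, untested sketch of what the client-side change could look like: route the temp ("backup") file path selection through LocalDirAllocator so that a read-only or full directory is skipped instead of failing the write. The config key "dfs.client.buffer.dir" and the helper name newBackupFile below are illustrative assumptions, not the actual patch.

import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.LocalDirAllocator;
import org.apache.hadoop.fs.Path;

class ClientTempFileSketch {
  // One allocator per config key; it picks among the comma-separated list of
  // local directories configured under that key ("dfs.client.buffer.dir" is
  // an assumed name) and avoids directories it cannot create or write in,
  // e.g. a read-only or full file system.
  private static final LocalDirAllocator DIR_ALLOCATOR =
      new LocalDirAllocator("dfs.client.buffer.dir");

  // Stand-in for the hard-coded "<data dir>/tmp/client-<id>" path that the
  // stack trace above shows failing with "Read-only file system".
  static File newBackupFile(Configuration conf, long blockSize)
      throws IOException {
    // Ask the allocator for a writable local path with roughly a block's
    // worth of free space.
    Path p = DIR_ALLOCATOR.getLocalPathForWrite(
        "client-" + System.currentTimeMillis(), blockSize, conf);
    return new File(p.toUri().getPath());
  }
}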

  was:
I encountered this exception during one of the randomwriter runs. I think this situation can be improved by using org.apache.hadoop.fs.LocalDirAllocator, which was written to handle these kinds of problems. I set the fix version to 0.14 but wonder whether it makes sense to have it in 0.13 itself (since the amount of code change would be small).

java.io.FileNotFoundException: /export/crawlspace4/kryptonite/ddas/dfs/data/tmp/client-1299146109450372217 (Read-only file system)
	at java.io.FileOutputStream.open(Native Method)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
	at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1356)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1273)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1255)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:152)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)


> DFS Clients should start using the org.apache.hadoop.fs.LocalDirAllocator
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-1372
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1372
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Devaraj Das
>             Fix For: 0.14.0
>
>
> I encountered this exception during one of the randomwriter runs. I think this situation can be improved by using org.apache.hadoop.fs.LocalDirAllocator, which was written to handle these kinds of problems. I set the fix version to 0.14 but wonder whether it makes sense to have it in 0.13 itself (since the amount of code change would be small).
> java.io.FileNotFoundException: /local/dfs/data/tmp/client-1299146109450372217 (Read-only file system)
> 	at java.io.FileOutputStream.open(Native Method)
> 	at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
> 	at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.endBlock(DFSClient.java:1356)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.flush(DFSClient.java:1273)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.write(DFSClient.java:1255)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.write(ChecksumFileSystem.java:402)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:38)
> 	at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
> 	at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:775)
> 	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:152)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:187)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1709)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.