Posted to common-dev@hadoop.apache.org by "Milind Bhandarkar (JIRA)" <ji...@apache.org> on 2006/09/26 23:15:52 UTC

[jira] Assigned: (HADOOP-508) random seeks using FSDataInputStream can become invalid such that reads return invalid data

     [ http://issues.apache.org/jira/browse/HADOOP-508?page=all ]

Milind Bhandarkar reassigned HADOOP-508:
----------------------------------------

    Assignee: Milind Bhandarkar

> random seeks using FSDataInputStream can become invalid such that reads return invalid data
> -------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-508
>                 URL: http://issues.apache.org/jira/browse/HADOOP-508
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.5.0
>            Reporter: Christian Kunz
>         Assigned To: Milind Bhandarkar
>
> Some of my applications using Hadoop DFS receive wrong data after certain random seeks. After some investigation I believe (without having looked at the source of java.io.BufferedInputStream) that it boils down to the following: when the method
> read(byte[] b, int off, int len) is called with an external buffer larger than the internal buffer, it reads into the external buffer directly, bypassing the internal buffer, but it does not invalidate the internal buffer by resetting the variable 'count' to 0. As a result, a subsequent seek to an offset that lies within one internal buffer size of the PositionCache's 'position' is satisfied from the internal buffer, which still holds outdated data from somewhere else.
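
To make the suspected failure mode concrete, here is a minimal, hypothetical Java sketch of a buffered seekable stream; it is not the Hadoop or JDK source, and the class and field names are invented for illustration only.

    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical simplified buffered seekable stream, used only to
    // illustrate the suspected bug described above.
    class StaleBufferSketch {
        private final InputStream in;               // underlying (e.g. DFS) stream
        private final byte[] buf = new byte[4096];  // internal buffer
        private int count = 0;                      // valid bytes currently in buf
        private int pos = 0;                        // next read offset within buf
        private long bufFilePos = 0;                // file offset of buf[0]

        StaleBufferSketch(InputStream in) {
            this.in = in;
        }

        int read(byte[] b, int off, int len) throws IOException {
            if (len >= buf.length) {
                // Large request: read straight into the caller's buffer and
                // bypass 'buf'. The suspected bug: 'count' is NOT reset to 0,
                // so 'buf' still appears to hold valid bytes for an old region.
                return in.read(b, off, len);
            }
            // (normal buffered path omitted for brevity)
            return in.read(b, off, len);
        }

        void seek(long target) throws IOException {
            if (target >= bufFilePos && target < bufFilePos + count) {
                // The seek target falls inside the buffered range, so only
                // 'pos' is moved. If that range is stale (see read() above),
                // the next read returns outdated bytes.
                pos = (int) (target - bufFilePos);
            } else {
                // (real repositioning of the underlying stream omitted)
                count = 0;
                pos = 0;
                bufFilePos = target;
            }
        }
    }

Under this sketch, resetting 'count' to 0 on the bypass path in read() would keep seek() from reusing the stale buffered range, which matches the diagnosis in the description.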

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira