Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2022/07/04 20:43:38 UTC

[GitHub] [beam] Enzo90910 opened a new issue, #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Enzo90910 opened a new issue, #22150:
URL: https://github.com/apache/beam/issues/22150

   ### What happened?
   
   I've been trying to use hadoopfilesystem.HadoopFileSystem to write TFX artefacts to HDFS. When the file I write is smaller than 20 MB, everything is fine. But when it is larger than 20 MB (the buffer size Beam gives Python's BufferedWriter in hadoopfilesystem.py), the written file is corrupted without any error being reported.
   I believe this is because HdfsUploader uses HDFScli (and, through it, the WebHDFS API) as if a file could be stream-written after opening, as is usually the case for filesystems. That doesn't seem to hold for WebHDFS/HDFScli: the file must be written all at once, with exactly one PUT request.
   - When the whole file is smaller than the BufferedWriter's buffer, it is written all at once when the BufferedWriter is closed, and everything works.
   - When the file is larger than the buffer, several write calls are made to the hdfs.InsecureClient on the *same* file handle, and all hell breaks loose.
   
   If I am right, this will be rather difficult to fix, since Beam's writing API requires the ability to stream to a file and WebHDFS doesn't seem to allow it.
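   The buffering behaviour described above can be demonstrated with the standard library alone: a raw sink that counts low-level writes receives a payload smaller than the buffer as a single write, but receives chunked writes that overflow the buffer as several writes on the same handle — the pattern WebHDFS cannot absorb. The CountingRaw class below is purely illustrative, not part of Beam:

```python
import io

class CountingRaw(io.RawIOBase):
    """Illustrative raw sink that records the size of every low-level write."""
    def __init__(self):
        super().__init__()
        self.write_calls = []
    def writable(self):
        return True
    def write(self, b):
        self.write_calls.append(len(b))
        return len(b)

# Payload smaller than the buffer: the sink sees one write, at close time.
small = CountingRaw()
w = io.BufferedWriter(small, buffer_size=1024)
w.write(b"x" * 100)
w.close()
print(len(small.write_calls))  # 1

# Chunked payload larger than the buffer: the sink sees several writes
# on the same handle.
large = CountingRaw()
w = io.BufferedWriter(large, buffer_size=1024)
for _ in range(10):
    w.write(b"y" * 500)
w.close()
print(len(large.write_calls) > 1)  # True
```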
   
   ### Issue Priority
   
   Priority: 1
   
   ### Issue Component
   
   Component: io-py-hadoop


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@beam.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [beam] Enzo90910 commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Enzo90910 commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1177301711

   I checked the pull request and fully agree with the fix (and the analysis). 




[GitHub] [beam] Abacn commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Abacn commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1176600332

   .take-issue




[GitHub] [beam] Enzo90910 commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Enzo90910 commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1174343788

   Last remark: although I am not exactly sure why, the bug doesn't manifest when the big file is written in a single call. Likely a write larger than the buffer bypasses the BufferedWriter's buffer entirely and goes straight to the underlying stream.




[GitHub] [beam] Enzo90910 commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Enzo90910 commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1174324226

   Obvious workarounds:
   - (for TFX) making your artefacts smaller by using more tasks/parallelism
   - making the BufferedWriter's buffer larger if your workers have enough memory (hadoopfilesystem.py line 253)
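   The second workaround works because a BufferedWriter whose buffer exceeds the total payload only touches the underlying stream once, at close time. A stdlib-only sketch (the CountingRaw sink is illustrative, not part of Beam):

```python
import io

class CountingRaw(io.RawIOBase):
    """Illustrative sink that counts low-level writes (not part of Beam)."""
    def __init__(self):
        super().__init__()
        self.write_calls = 0
    def writable(self):
        return True
    def write(self, b):
        self.write_calls += 1
        return len(b)

payload = b"z" * (30 * 1024 * 1024)  # 30 MB, written in 1 MB chunks
sink = CountingRaw()
# Buffer sized above the whole payload, mirroring the workaround of
# raising the buffer size used in hadoopfilesystem.py.
w = io.BufferedWriter(sink, buffer_size=len(payload) + 1)
for off in range(0, len(payload), 1024 * 1024):
    w.write(payload[off:off + 1024 * 1024])
w.close()
print(sink.write_calls)  # 1 -- a single write, i.e. a single PUT
```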




[GitHub] [beam] Abacn commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Abacn commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1176538410

   Reproduced using a local (pseudo-distributed) HDFS. The snippet above generates a file with the expected size of 30888890 bytes, but 62006 records are missing and 45116 records are duplicated.
   
   .take-issue
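   Tallies like the above can be produced by reading the file back and comparing it against the expected 0..N-1 sequence. A sketch of such an audit (audit_records is a hypothetical helper, not something from Beam or the reproduction):

```python
from collections import Counter

def audit_records(lines, expected_count):
    """Count records missing from, and duplicated within, a file that
    should contain the decimal strings "0" .. str(expected_count - 1)."""
    counts = Counter(lines)
    missing = sum(1 for i in range(expected_count) if counts[str(i)] == 0)
    duplicated = sum(1 for c in counts.values() if c > 1)
    return missing, duplicated

# A toy corrupted file: "2" is missing and "1" appears twice.
print(audit_records(["0", "1", "1", "3"], 4))  # (1, 1)
```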
   




[GitHub] [beam] Enzo90910 commented on issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

Enzo90910 commented on issue #22150:
URL: https://github.com/apache/beam/issues/22150#issuecomment-1174331341

   Additional notes: I have reproduced this bug without using TFX at all, with only a Python 3.7 container and Beam 2.39, writing text to a big file:
   ```python
   import apache_beam.io.hadoopfilesystem
   import apache_beam.options.pipeline_options

   fs = apache_beam.io.hadoopfilesystem.HadoopFileSystem(
       apache_beam.options.pipeline_options.PipelineOptions.from_dictionary(
           {"hdfs_host": "hadoop-hadoop-hdfs-nn.hdfs",
            "hdfs_port": "50070",
            "hdfs_user": "root"}))
   writer = fs.create("hdfs://myfile.txt")
   for i in range(4000000):
       writer.write(bytes(str(i) + "\n", "utf-8"))
   writer.close()
   ```




[GitHub] [beam] pabloem closed issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.

pabloem closed issue #22150: [Bug]: Writing more than 20MB to HDFS results in silent data corruption.
URL: https://github.com/apache/beam/issues/22150

