Posted to common-issues@hadoop.apache.org by "Jingxuan Fu (Jira)" <ji...@apache.org> on 2022/04/25 05:03:00 UTC

[jira] [Commented] (HADOOP-18216) Ensure "io.file.buffer.size" is greater than zero. Otherwise, it will lead to data read/write blockage

    [ https://issues.apache.org/jira/browse/HADOOP-18216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17527306#comment-17527306 ] 

Jingxuan Fu commented on HADOOP-18216:
--------------------------------------

I committed a PR to trunk. Could you assign this issue to me?

> Ensure "io.file.buffer.size" is greater than zero. Otherwise, it will lead to data read/write blockage
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18216
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18216
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Jingxuan Fu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When the "io.file.buffer.size" field in the configuration file is set to a value less than or equal to zero, HDFS still starts normally, but reading and writing data fails.
> When the value is less than zero, the shell throws the following exception:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> -cat: Fatal internal error
> java.lang.NegativeArraySizeException: -4096
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:93)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:68)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:129)
>         at org.apache.hadoop.fs.shell.Display$Cat.printToStdout(Display.java:101)
>         at org.apache.hadoop.fs.shell.Display$Cat.processPath(Display.java:96)
>         at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:331)
>         at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:303)
>         at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:285)
>         at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:269)
>         at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:120)
>         at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
>         at org.apache.hadoop.fs.FsShell.run(FsShell.java:328)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>         at org.apache.hadoop.fs.FsShell.main(FsShell.java:391){code}
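> The trace is consistent with IOUtils.copyBytes allocating a byte buffer of the configured size, so a negative value fails at the array allocation itself before any data is copied. A minimal sketch of such a copy loop (a simplified stand-in for illustration, not the actual Hadoop source):
> {code:java}
> import java.io.ByteArrayInputStream;
> import java.io.ByteArrayOutputStream;
> import java.io.IOException;
> import java.io.InputStream;
> import java.io.OutputStream;
>
> public class CopyBytesSketch {
>     // Hypothetical simplified copy loop: allocates a buffer of buffSize
>     // bytes, so a negative size throws NegativeArraySizeException.
>     static void copyBytes(InputStream in, OutputStream out, int buffSize)
>             throws IOException {
>         byte[] buf = new byte[buffSize]; // throws if buffSize < 0
>         int bytesRead = in.read(buf);
>         while (bytesRead >= 0) {
>             out.write(buf, 0, bytesRead);
>             bytesRead = in.read(buf);
>         }
>     }
>
>     public static void main(String[] args) throws IOException {
>         InputStream in = new ByteArrayInputStream("data".getBytes());
>         try {
>             copyBytes(in, new ByteArrayOutputStream(), -4096);
>         } catch (NegativeArraySizeException e) {
>             System.out.println("caught NegativeArraySizeException");
>         }
>     }
> }
> {code}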
> When the value is equal to zero, the shell command blocks indefinitely:
> {code:java}
> hadoop@ljq1:~/hadoop-3.1.3-work/bin$ ./hdfs dfs -cat mapred
> ^Z
> [2]+  Stopped                 ./hdfs dfs -cat mapred{code}
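> The hang at zero is also what a naive copy loop would do: InputStream.read(byte[]) is specified to return 0 (not -1) when the array length is zero, so a zero-size buffer never observes end-of-stream. A hedged demonstration, bounded here so it terminates (the real loop would spin forever):
> {code:java}
> import java.io.ByteArrayInputStream;
> import java.io.IOException;
> import java.io.InputStream;
>
> public class ZeroBufferSketch {
>     public static void main(String[] args) throws IOException {
>         InputStream in = new ByteArrayInputStream("data".getBytes());
>         byte[] buf = new byte[0]; // io.file.buffer.size = 0
>         // read(byte[]) returns 0 for a zero-length array, so a loop
>         // waiting for -1 (EOF) never terminates.
>         int spins = 0;
>         int n = in.read(buf);
>         while (n >= 0 && spins < 1000) { // bound added for this sketch
>             spins++;
>             n = in.read(buf);
>         }
>         System.out.println("read returned 0 for " + spins
>                 + " iterations without reaching EOF");
>     }
> }
> {code}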
> The description in the configuration file is not clear enough; it may lead people to think that setting the value to 0 enables a non-blocking mode.
>  
> {code:java}
> <property>   
>     <name>io.file.buffer.size</name>   
>     <value>4096</value>   
>     <description>The size of buffer for use in sequence files.   
>     The size of this buffer should probably be a multiple of hardware   
>     page size (4096 on Intel x86), and it determines how much data is   
>     buffered during read and write operations.</description> 
> </property>{code}
>  
> Considering that this value is used frequently by HDFS and MapReduce, we should require it to be a number greater than zero.
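> One way to enforce the constraint is to validate the configured value up front and fail fast with a clear message, instead of failing later inside the copy loop or blocking forever. A hedged sketch (the helper name is hypothetical; the actual patch may differ):
> {code:java}
> public class BufferSizeCheck {
>     static final String IO_FILE_BUFFER_SIZE_KEY = "io.file.buffer.size";
>
>     // Hypothetical validation helper: rejects non-positive buffer sizes
>     // at configuration-read time with an explicit error message.
>     static int checkedBufferSize(int configured) {
>         if (configured <= 0) {
>             throw new IllegalArgumentException(IO_FILE_BUFFER_SIZE_KEY
>                     + " must be greater than 0, got " + configured);
>         }
>         return configured;
>     }
>
>     public static void main(String[] args) {
>         System.out.println(checkedBufferSize(4096)); // valid default passes
>         try {
>             checkedBufferSize(0); // zero is rejected immediately
>         } catch (IllegalArgumentException e) {
>             System.out.println(e.getMessage());
>         }
>     }
> }
> {code}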
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
