Posted to common-issues@hadoop.apache.org by "zhangshuyan0 (via GitHub)" <gi...@apache.org> on 2023/05/02 03:39:47 UTC

[GitHub] [hadoop] zhangshuyan0 opened a new pull request, #5612: HADOOP-18726. Set the locale to avoid printing useless logs.

zhangshuyan0 opened a new pull request, #5612:
URL: https://github.com/apache/hadoop/pull/5612

   In our production environment, if the Hadoop process is started under a non-English locale, a large number of unexpected error logs are printed. The following is the error output from a DataNode; the message 断开的管道 is the Chinese localization of "Broken pipe".
   ```
   2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,299 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: 
   java.io.IOException: 断开的管道
           at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
           at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
           at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
           at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
           at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
           at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
           at java.lang.Thread.run(Thread.java:745)
   2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,298 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,302 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: 
   java.io.IOException: 断开的管道
           at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
           at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
           at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
           at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
           at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
           at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
           at java.lang.Thread.run(Thread.java:745)
   2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.FileIoProvider: error in op transferToSocketFully : 断开的管道
   2023-05-01 09:10:50,303 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception: 
   java.io.IOException: 断开的管道
           at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
           at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428)
           at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493)
           at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608)
           at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:242)
           at org.apache.hadoop.hdfs.server.datanode.FileIoProvider.transferToSocketFully(FileIoProvider.java:260)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:568)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:801)
           at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:755)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:580)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
           at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
           at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:258)
           at java.lang.Thread.run(Thread.java:745)
   ```
   The root cause is that the code below compares the message text of the IOException to decide whether to suppress the log, but the locale changes the content of that message, so under a non-English locale the comparison never matches and every client disconnect is logged at ERROR level:
   https://github.com/apache/hadoop/blob/87e17b2713600badbad1daceb72f2a9139f3de10/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java#L654-L666
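   For reference, a minimal, self-contained sketch of that pattern. This is a paraphrase, not the verbatim Hadoop code: the class and method names are illustrative, and it assumes the real check matches English message prefixes such as "Broken pipe" and "Connection reset".
   ```java
   import java.io.IOException;

   /**
    * Simplified paraphrase of the locale-sensitive check in BlockSender
    * (see the linked lines). Illustrative only, not the actual Hadoop code.
    */
   public class LocaleSensitiveCheck {
     static void handleSendFailure(IOException e) {
       String ioem = e.getMessage();
       // Harmless client disconnects are meant to be suppressed by matching the
       // English message text. Under a non-English locale the JVM returns a
       // localized message (e.g. "断开的管道" instead of "Broken pipe"), so the
       // prefixes never match and the exception is logged at ERROR level.
       if (ioem == null
           || (!ioem.startsWith("Broken pipe") && !ioem.startsWith("Connection reset"))) {
         System.err.println("BlockSender.sendChunks() exception: " + e);
       }
     }

     public static void main(String[] args) {
       handleSendFailure(new IOException("Broken pipe"));   // suppressed
       handleSendFailure(new IOException("断开的管道"));      // logged as an error
     }
   }
   ```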
   This flood of error logs is very misleading, so this patch sets the LANG environment variable in hadoop-env.sh so that the JVM keeps producing error messages in a consistent (English) locale.
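   A hedged sketch of what such a hadoop-env.sh entry could look like; the exact line and comment in the committed patch may differ.
   ```sh
   # hadoop-env.sh: pin the process locale so native error strings such as
   # "Broken pipe" are not localized and message-based checks keep working.
   export LANG=en_US.UTF-8
   ```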



[GitHub] [hadoop] Hexiaoqiao merged pull request #5612: HADOOP-18726. Set the locale to avoid printing useless logs.

Posted by "Hexiaoqiao (via GitHub)" <gi...@apache.org>.
Hexiaoqiao merged PR #5612:
URL: https://github.com/apache/hadoop/pull/5612




[GitHub] [hadoop] Hexiaoqiao commented on pull request #5612: HADOOP-18726. Set the locale to avoid printing useless logs.

Posted by "Hexiaoqiao (via GitHub)" <gi...@apache.org>.
Hexiaoqiao commented on PR #5612:
URL: https://github.com/apache/hadoop/pull/5612#issuecomment-1531744247

   Committed to trunk. Thanks @zhangshuyan0 for your contribution! Thanks @ayushtkn for your reviews!




[GitHub] [hadoop] hadoop-yetus commented on pull request #5612: HADOOP-18726. Set the locale to avoid printing useless logs.

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #5612:
URL: https://github.com/apache/hadoop/pull/5612#issuecomment-1530888344

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 22s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 17s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 44s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  88m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5612 |
   | Optional Tests | dupname asflicense mvnsite unit codespell detsecrets shellcheck shelldocs |
   | uname | Linux 0ba13099fc75 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7bcd58a297363a51d005b50b32f6d7abd752e2b9 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5612/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

