Posted to issues@hawq.apache.org by "Hongxu Ma (JIRA)" <ji...@apache.org> on 2018/06/21 02:33:00 UTC
[jira] [Comment Edited] (HAWQ-1627) Support setting the max protocol message size when talking with HDFS
[ https://issues.apache.org/jira/browse/HAWQ-1627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16518816#comment-16518816 ]
Hongxu Ma edited comment on HAWQ-1627 at 6/21/18 2:32 AM:
----------------------------------------------------------
Users can set this property in _etc/hdfs-client.xml_:
{code:xml}
<property>
<name>ipc.maximum.data.length</name>
<value>134217728</value>
</property>
{code}
The default value is 67108864 (64 MB).
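For reference, both values above are powers-of-two byte counts; the example value simply doubles the default. A quick sanity check of the arithmetic (illustrative Python, not part of HAWQ):

```python
# Sanity-check the byte values quoted above.
DEFAULT_LIMIT = 67108864   # default protobuf message limit (bytes)
RAISED_LIMIT = 134217728   # value used in the hdfs-client.xml example

MIB = 1024 * 1024
print(DEFAULT_LIMIT // MIB)            # 64  -> the 64 MB default
print(RAISED_LIMIT // MIB)             # 128 -> the example value
print(RAISED_LIMIT == 2 * DEFAULT_LIMIT)  # True -> the example doubles the default
```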
> Support setting the max protocol message size when talking with HDFS
> --------------------------------------------------------------------
>
> Key: HAWQ-1627
> URL: https://issues.apache.org/jira/browse/HAWQ-1627
> Project: Apache HAWQ
> Issue Type: Improvement
> Components: libhdfs
> Reporter: Hongxu Ma
> Assignee: Hongxu Ma
> Priority: Major
> Fix For: 2.4.0.0-incubating
>
>
> Currently, the maximum protocol message size in libhdfs is 64 MB, and it cannot be adjusted.
> When this limit is exceeded (for example, when accessing a very large HDFS table/file), the following lines appear in the HAWQ master log:
> {code}
> 2018-06-20 11:21:56.768003 CST,,,p75703,th-848100416,,,,0,,,seg-10000,,,,,"LOG","00000","3rd party error log:
> [libprotobuf ERROR google/protobuf/io/coded_stream.cc:208] A protocol message was rejected because it was too big (more than 67108864 bytes). To increase the limit (or to
> disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.",,,,,,,,"SysLoggerMain","syslogger.c",518,
> 2018-06-20 11:21:56.771657 CST,,,p75703,th-848100416,,,,0,,,seg-10000,,,,,"LOG","00000","3rd party error log:
> 2018-06-20 11:21:56.771492, p75751, th0x7fffcd7303c0, ERROR Failed to invoke RPC call ""getFsStats"" on server ""localhost:9000"":
> RpcChannel.cpp: 783: HdfsRpcException: RPC channel to ""localhost:9000"" got protocol mismatch: RPC channel cannot parse response header.
> @ Hdfs::Internal::RpcChannelImpl::readOneResponse(bool)
> @ Hdfs::Internal::RpcChannelImpl::checkOneResponse()
> @ Hdfs::Internal::RpcChannelImpl::invokeInternal(std::__1::shared_ptr<Hdfs::Internal::RpcRemoteCall>)
> @ Hdfs::Internal::RpcChannelImpl:",,,,,,,,"SysLoggerMain","syslogger.c",518,
> 2018-06-20 11:21:56.771711 CST,,,p75703,th-848100416,,,,0,,,seg-10000,,,,,"LOG","00000","3rd party error log:
> :invoke(Hdfs::Internal::RpcCall const&)
> @ Hdfs::Internal::NamenodeImpl::invoke(Hdfs::Internal::RpcCall const&)
> @ Hdfs::Internal::NamenodeImpl::getFsStats()
> @ Hdfs::Internal::NamenodeProxy::getFsStats()
> @ Hdfs::Internal::FileSystemImpl::getFsStats()
> @ Hdfs::Internal::FileSystemImpl::connect()
> @ Hdfs::FileSystem::connect(char const*, char const*, char const*)
> @ Hdfs::FileSystem::connect(char const*)
> @ hdfsBuilderConnect
> @ gpfs_hdfs_connect
> @ HdfsConnect
> @ HdfsGetConnection
> @ HdfsGetFileBlockLocati",,,,,,,,"SysLoggerMain","syslogger.c",518,
> {code}
> Considering HDFS already has a configuration property, *ipc.maximum.data.length*, to control this limit, HAWQ should add a GUC for it as well.
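> As a note, *ipc.maximum.data.length* is also a server-side Hadoop IPC setting; if the NameNode itself rejects oversized requests, the same property may need to be raised in the server's configuration as well. A hedged sketch, assuming a standard Hadoop deployment (the exact file, e.g. _hdfs-site.xml_ or _core-site.xml_, depends on the installation):
> {code:xml}
> <property>
>   <name>ipc.maximum.data.length</name>
>   <value>134217728</value>
> </property>
> {code}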
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)