Posted to commits@cassandra.apache.org by "Satoshi Konno (JIRA)" <ji...@apache.org> on 2016/07/04 12:29:10 UTC
[jira] [Commented] (CASSANDRA-11303) New inbound throughput parameters for streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-11303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15361243#comment-15361243 ]
Satoshi Konno commented on CASSANDRA-11303:
-------------------------------------------
Hi [~pauloricardomg],
I have uploaded a new patch which includes the following changes.
{quote}
- Rename StorageService.(get/set)(Inbound/Outbound)StreamThroughputMbPerSec to be consistent with DatabaseDescriptor.(get/set)StreamThroughput(Inbound/Outbound)MegabitsPerSec.
- Add stream_throughput_inbound_megabits_per_sec to inter_dc_stream_throughput* property description on cassandra.yaml
- Add deprecation javadoc to deprecated StorageServiceMBean methods (see forceRepairAsync for example)
- remove additional spaces from throw new IOException("CF " + cfId + " was dropped during streaming");
- It seems your IDE removed static qualifiers on StreamReader constants and static methods
- Keep old methods on NodeProbe (add deprecation flag) in case they're used externally
{quote}
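For the rename/deprecation items above, the pattern could look roughly like this. This is only an illustrative sketch; the class and method names below are my own stand-ins, not the actual Cassandra NodeProbe API:

```java
// Rough sketch of the rename-and-deprecate pattern discussed above.
// NodeProbeSketch and its method names are illustrative only, not the
// actual Cassandra NodeProbe API.
class NodeProbeSketch
{
    private volatile double inboundMegabits = 200;

    public double getStreamThroughputInboundMegabitsPerSec()
    {
        return inboundMegabits;
    }

    public void setStreamThroughputInboundMegabitsPerSec(double v)
    {
        inboundMegabits = v;
    }

    /** @deprecated use {@link #getStreamThroughputInboundMegabitsPerSec()} instead */
    @Deprecated
    public double getInboundStreamThroughputMbPerSec()
    {
        // Old name kept as a thin forwarder in case external tools still call it.
        return getStreamThroughputInboundMegabitsPerSec();
    }
}
```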
I tried to update the throughput values of the ongoing streams dynamically, but I suppose it is hard to
do with the singleton pattern because the streams are generated based on the peer address.
I was going to manage a map of all ongoing peer streams in order to update the throughput values dynamically,
but I think that approach is unreasonable.
In short, I hope this patch can be accepted without the dynamic throughput changes.
Please let me know if you have any suggestions.
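For reference, a single shared limiter that re-reads its rate on every acquire would let ongoing streams pick up rate changes without tracking peers in a map. The following is only a rough standalone sketch of that idea; the class and method names are my own, not from the patch:

```java
// Illustrative sketch only (not from the patch): a single shared inbound
// limiter that re-reads its rate on every acquire, so a setter invoked at
// runtime takes effect for ongoing streams without a per-peer map.
class InboundStreamLimiter
{
    private static final InboundStreamLimiter INSTANCE = new InboundStreamLimiter();

    private volatile double bytesPerSec; // 0 or less means unlimited
    private double available;            // token bucket, capped at one second of burst
    private long lastRefillNanos;

    static InboundStreamLimiter instance()
    {
        return INSTANCE;
    }

    synchronized void setRateMegabitsPerSec(double megabits)
    {
        bytesPerSec = megabits * 1024 * 1024 / 8;
    }

    double getRateMegabitsPerSec()
    {
        return bytesPerSec * 8 / (1024 * 1024);
    }

    // Charge 'bytes' against the bucket, sleeping off any deficit.
    synchronized void acquire(long bytes) throws InterruptedException
    {
        double rate = bytesPerSec; // re-read so dynamic updates apply here
        if (rate <= 0)
            return;
        long now = System.nanoTime();
        available = Math.min(available + (now - lastRefillNanos) / 1e9 * rate, rate);
        lastRefillNanos = now;
        available -= bytes; // may go negative: the deficit is paid by sleeping
        if (available < 0)
        {
            Thread.sleep((long) Math.ceil(-available / rate * 1000));
            lastRefillNanos = System.nanoTime();
            available = 0;
        }
    }
}
```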
> New inbound throughput parameters for streaming
> -----------------------------------------------
>
> Key: CASSANDRA-11303
> URL: https://issues.apache.org/jira/browse/CASSANDRA-11303
> Project: Cassandra
> Issue Type: New Feature
> Components: Configuration
> Reporter: Satoshi Konno
> Priority: Minor
> Attachments: 11303_inbound_limit_debug_20160419.log, 11303_inbound_nolimit_debug_20160419.log, 11303_inbound_patch_for_trunk_20160419.diff, 11303_inbound_patch_for_trunk_20160525.diff, 200vs40inboundstreamthroughput.png, cassandra_inbound_stream.diff
>
>
> Hi,
> To specify the stream throughputs of a node more precisely, I would like to add the following new inbound parameters to cassandra.yaml, matching the existing outbound parameters:
> - stream_throughput_inbound_megabits_per_sec
> - inter_dc_stream_throughput_inbound_megabits_per_sec
> We currently use only the existing outbound parameters, but it is difficult to control the total throughput of a node. In our production network, critical alerts fire when a node exceeds its specified total throughput, which is the sum of the inbound and outbound throughputs.
> In our operation of Cassandra, these alerts occur during bootstrap or repair when a new node is added. In the worst case, we have to take the offending node out of operation.
> I have attached the patch under consideration. I would like to add a new limiter class, StreamInboundRateLimiter, and use it in the StreamDeserializer class. I use Row::dataSize() to measure the input throughput in StreamDeserializer::newPartition(), but I am not sure whether dataSize() returns the correct data size.
> Can someone please tell me how to do it?
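To illustrate the metering point described above, here is a rough standalone sketch, not actual Cassandra internals: each deserialized record's byte size is charged to an inbound limiter, analogous to feeding Row::dataSize() into StreamInboundRateLimiter from StreamDeserializer::newPartition(). The length-prefixed record format and class names here are invented for the example:

```java
import java.io.DataInputStream;
import java.io.IOException;

// Illustrative sketch only, not actual Cassandra internals: charge each
// deserialized record's size to an inbound limiter, in the spirit of
// calling StreamInboundRateLimiter with Row.dataSize(). The
// length-prefixed record format is invented for this example.
class MeteredStreamReader
{
    interface Limiter
    {
        void acquire(long bytes);
    }

    private final Limiter limiter;

    MeteredStreamReader(Limiter limiter)
    {
        this.limiter = limiter;
    }

    // Reads length-prefixed records until the stream is exhausted and
    // returns the total number of bytes consumed.
    long readAll(DataInputStream in) throws IOException
    {
        long total = 0;
        while (in.available() > 0)
        {
            int size = in.readInt();       // stand-in for Row.dataSize()
            byte[] payload = new byte[size];
            in.readFully(payload);
            limiter.acquire(4 + size);     // throttle on bytes actually consumed
            total += 4 + size;
        }
        return total;
    }
}
```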
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)