Posted to commits@samza.apache.org by "Chris Riccomini (JIRA)" <ji...@apache.org> on 2014/07/15 18:29:08 UTC
[jira] [Commented] (SAMZA-334) Need for asymmetric container config
[ https://issues.apache.org/jira/browse/SAMZA-334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14062271#comment-14062271 ]
Chris Riccomini commented on SAMZA-334:
---------------------------------------
Yea, I agree. I haven't thought this issue through in much detail, but I recognize the problem as well.
A simple straw-man proposal would be to allow per-container values for yarn.container.memory.mb, task.opts (for -Xmx), and yarn.container.cpu.cores. This could probably be done by assigning resources per TaskName (after SAMZA-123 is committed, since it introduces TaskNames).
You could take this style one step further and allow more customization of these configs:
{code}
val CONTAINER_MAX_MEMORY_MB = "yarn.container.memory.mb"
val CONTAINER_MAX_CPU_CORES = "yarn.container.cpu.cores"
val CONTAINER_RETRY_COUNT = "yarn.container.retry.count"
val CONTAINER_RETRY_WINDOW_MS = "yarn.container.retry.window.ms"
{code}
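A minimal sketch of how such per-container overrides might resolve against the job-wide defaults above. The per-container key shape ({{yarn.container.<name>.memory.mb}}) and the helper are purely illustrative, not an existing Samza API:

{code}
// Sketch only: per-container override keys of the form
// "yarn.container.<containerName>.memory.mb" are hypothetical,
// not part of Samza's actual config surface.
object ContainerConfigSketch {
  val ContainerMaxMemoryMb = "yarn.container.memory.mb"

  // Look up the per-container key first, then fall back to the
  // symmetric job-wide default, then to a hard-coded default.
  def memoryMbFor(config: Map[String, String],
                  containerName: String,
                  default: Int = 1024): Int =
    config
      .get(s"yarn.container.$containerName.memory.mb")
      .orElse(config.get(ContainerMaxMemoryMb))
      .map(_.toInt)
      .getOrElse(default)
}
{code}

With this shape, a single heavy container could request, say, 4096 MB while the rest of the job keeps the symmetric default.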
We have already done this to some extent by introducing customizable AM configs:
{code}
val AM_JVM_OPTIONS = "yarn.am.opts"
val AM_JMX_ENABLED = "yarn.am.jmx.enabled"
val AM_CONTAINER_MAX_MEMORY_MB = "yarn.am.container.memory.mb"
val AM_POLL_INTERVAL_MS = "yarn.am.poll.interval.ms"
{code}
We should also think through how this would interact with auto-scaling, if we were to support such a feature. I think you could build auto-scaling if you had per-container configs (this ticket), combined with SAMZA-123's custom partitioning strategy, and a ConfigLog (also discussed in SAMZA-123) that triggers container restarts when the config is mutated.
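The restart-on-mutation piece of that auto-scaling idea could be sketched as a diff over two config-log snapshots. Everything here is hypothetical (there is no ConfigLog yet, and the per-container key shape is assumed from the discussion above):

{code}
// Sketch of the restart-trigger step described above. The
// per-container key format and this whole object are hypothetical,
// not an existing Samza API.
object AutoScaleSketch {
  type Config = Map[String, String]

  // Return the container names whose per-container memory setting
  // changed between two config snapshots, i.e. those needing a restart.
  def containersToRestart(old: Config, updated: Config): Set[String] = {
    val Key = """yarn\.container\.([^.]+)\.memory\.mb""".r
    val changedKeys =
      (old.keySet ++ updated.keySet).filter(k => old.get(k) != updated.get(k))
    changedKeys.collect { case Key(name) => name }
  }
}
{code}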
> Need for asymmetric container config
> ------------------------------------
>
> Key: SAMZA-334
> URL: https://issues.apache.org/jira/browse/SAMZA-334
> Project: Samza
> Issue Type: Improvement
> Components: container
> Affects Versions: 0.8.0
> Reporter: Chinmay Soman
>
> The current (and upcoming) partitioning scheme(s) suggest that there might be a skew in the amount of data ingested and computation performed across different containers for a given Samza job. This directly affects the amount of resources required by a container - which today are completely symmetric.
> Case A] Partitioning on Kafka partitions
> For instance, consider a partitioner job that reads data from different Kafka topics (having different partition layouts). In this case, it's possible that many topics have a small number of Kafka partitions. Consequently, the containers processing those low-numbered partitions would need more resources than those responsible for the higher-numbered partitions.
> Case B] Partitioning based on Kafka topics
> Even in this case, it's very easy for some containers to be doing more work than others - leading to a skew in resource requirements.
> Today, the container config is based on the requirements of the worst-case container (the one doing the most work). Needless to say, this leads to resource wastage. A better approach needs to consider the true requirement of each container, rather than of the job as a whole.
--
This message was sent by Atlassian JIRA
(v6.2#6252)