Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2019/11/13 18:27:16 UTC

[GitHub] [flink] azagrebin opened a new pull request #10180: [FLINK-14631] Account for netty direct allocations in direct memory limit (Netty Shuffle)

URL: https://github.com/apache/flink/pull/10180
 
 
   ## What is the purpose of the change
   
   At the moment, after [FLINK-13982](https://jira.apache.org/jira/browse/FLINK-13982), the JVM direct memory limit calculation only accounts for the memory segment network buffers, but not for the direct allocations made by the netty arenas in `org.apache.flink.runtime.io.network.netty.NettyBufferPool`. The netty arenas should be included in the shuffle memory calculation.
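   
   As a rough illustration of the intended accounting (the class and method below are hypothetical, not Flink's actual API), the direct memory reserved for shuffle would cover the netty arena chunks on top of the network memory segments:
   
   ```java
   /**
    * Hypothetical sketch: estimate the JVM direct memory needed for shuffle
    * when netty arena chunks are counted in addition to the network buffers.
    */
   public final class ShuffleDirectMemorySketch {
   
       /**
        * @param networkBufferBytes total size of the network memory segments
        * @param numArenas          number of direct arenas in NettyBufferPool
        * @param chunkSizeBytes     size of one arena chunk (pageSize << maxOrder)
        */
       static long directMemoryForShuffle(long networkBufferBytes, int numArenas, long chunkSizeBytes) {
           // Previously only the network buffers were counted; the chunks
           // allocated by the netty arenas were invisible to the limit.
           return networkBufferBytes + (long) numArenas * chunkSizeBytes;
       }
   }
   ```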
   
   The arenas should not be used much after the introduction of credit-based flow control: data received into netty buffers should be immediately copied into network buffers. Therefore, defaulting the number of arenas to the number of slots seems redundant, and a single 16 MB arena should suffice by default. Eventually, arena usage should drop to almost zero after #7368, except for sending task events etc.
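   
   For illustration only (this is not Flink's `NettyBufferPool` code, just a self-contained sketch against netty's public allocator API): with netty's default page size of 8192 bytes and max order of 11, a single direct arena corresponds to one 16 MB chunk.
   
   ```java
   import io.netty.buffer.PooledByteBufAllocator;
   
   public class SingleArenaAllocatorSketch {
       public static void main(String[] args) {
           int pageSize = 8192;
           int maxOrder = 11; // chunk size = pageSize << maxOrder = 16 MB
   
           // Pooled allocator with no heap arenas and a single direct arena,
           // mirroring the "one 16 MB arena by default" idea described above.
           PooledByteBufAllocator allocator = new PooledByteBufAllocator(
                   true,      // preferDirect
                   0,         // nHeapArena
                   1,         // nDirectArena
                   pageSize,
                   maxOrder);
   
           System.out.println("arena chunk size: " + (pageSize << maxOrder) + " bytes");
       }
   }
   ```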
   
   Other uncontrollable off-heap memory usage in Flink dependencies can be addressed by the framework off-heap memory configuration introduced in #10124.
   
   ## Brief change log
   
     - *Include the direct memory allocated by the netty arenas (`NettyBufferPool`) in the JVM direct memory limit as part of the shuffle memory calculation*
     - *Use a single 16 MB netty arena by default instead of one arena per slot*
   
   
   ## Verifying this change
   
    *(Please pick one of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads (100MB)*
     - *Extended integration test for recovery after master (JobManager) failure*
     - *Added test that validates that TaskInfo is transferred only once across recoveries*
      - *Manually verified the change by running a 4 node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no)
     - The serializers: (yes / no / don't know)
     - The runtime per-record code paths (performance sensitive): (yes / no / don't know)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services