Posted to common-dev@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/08/16 05:07:00 UTC
[jira] [Created] (HADOOP-15679) ShutdownHookManager shutdown time needs to be configurable & extended
Steve Loughran created HADOOP-15679:
---------------------------------------
Summary: ShutdownHookManager shutdown time needs to be configurable & extended
Key: HADOOP-15679
URL: https://issues.apache.org/jira/browse/HADOOP-15679
Project: Hadoop Common
Issue Type: Bug
Components: util
Affects Versions: 3.0.0, 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
HADOOP-12950 added a timeout on shutdown hooks to avoid problems with hanging shutdowns. But the timeout is too short for applications that need to flush a large amount of data on shutdown.
A key example of this is Spark apps which save their history to object stores, where the file close() call triggers an upload of the final locally cached block of data (could be 32+MB), and then executes the final multipart commit.
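To illustrate why a fixed budget can abandon a slow close()/upload, here is a minimal JDK-only sketch of the pattern a timeout-bounded shutdown hook runner uses (run the hook on an executor, wait up to the timeout, give up if it overruns). The class and method names are illustrative, not Hadoop's actual ShutdownHookManager API.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ShutdownTimeoutSketch {

    /** Runs a hook with a bounded wait; returns true only if it finished in time. */
    static boolean runWithTimeout(Runnable hook, long timeout, TimeUnit unit) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> f = executor.submit(hook);
            f.get(timeout, unit);
            return true;
        } catch (TimeoutException e) {
            // The hook overran its budget: the JVM proceeds with shutdown and
            // any in-flight flush/upload is abandoned -- the failure mode
            // described above for large object-store writes.
            return false;
        } catch (InterruptedException | ExecutionException e) {
            return false;
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // A fast hook fits in the budget...
        System.out.println(runWithTimeout(() -> { }, 1, TimeUnit.SECONDS));
        // ...but a "final upload" that takes longer than the budget is cut off.
        System.out.println(runWithTimeout(() -> {
            try {
                Thread.sleep(3000); // stand-in for a multi-second close()/upload
            } catch (InterruptedException ignored) {
            }
        }, 1, TimeUnit.SECONDS));
    }
}
```

With a hard-coded budget, the only fix for the second case is raising the timeout, which is why the proposal below makes it configurable.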
Proposed
# make the default timeout 30s, not 10s
# make it configurable with a time-duration property (with a minimum of 1s?)
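As a sketch of what the proposal might look like, a time-duration property could be set in core-site.xml like the fragment below. The property name and default shown here are illustrative only; the actual name would be decided in the patch.

```xml
<!-- Hypothetical property implementing the proposal above:
     a time-duration value (units suffix: ms, s, m, ...), default 30s. -->
<property>
  <name>hadoop.service.shutdown.timeout</name>
  <value>30s</value>
</property>
```

Hadoop's Configuration already supports such values via getTimeDuration(), which parses a unit suffix and falls back to a caller-supplied default, so enforcing a 1s floor would be a small addition in ShutdownHookManager itself.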
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org