Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/10/28 08:43:01 UTC

[GitHub] [flink] dangshazi opened a new pull request, #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

dangshazi opened a new pull request, #21185:
URL: https://github.com/apache/flink/pull/21185

   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
     Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PuncuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
   * This pull request makes the HistoryServer support lazy unzipping of archived jobs. This feature reduces the number of JSON files in `historyserver.web.tmpdir` and prevents the node from running out of inodes.*
   
   
   ## Brief change log
  - *The HistoryServer downloads all archived jobs into the `archivedJobs` directory, but does not unzip the files immediately*
  - *An archived job is unzipped when `HistoryServerStaticFileServerHandler` receives a request for that job*
  - *Unzipping runs in the `Flink-HistoryServer-Unzipper` executor; a "processing" message is returned to the web frontend if the unzip task does not finish within the specified timeout*
  - *Added `historyserver.archive.cached-jobs` to limit the number of unzipped jobs kept by the HistoryServer (an example configuration is sketched below)*
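
   A minimal `flink-conf.yaml` sketch of the new option (the key and the default value of 500 are taken from the `HistoryServerOptions` diff quoted later in this thread; the comment is illustrative):

   ```yaml
   # Limit the number of unzipped job archives kept in historyserver.web.tmpdir.
   # Jobs evicted from this cache have their unzipped files deleted again.
   historyserver.archive.cached-jobs: 500
   ```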
     
   
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
  - *Added integration tests for end-to-end history job access*
   
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
  - The S3 file system connector: (no)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (no)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099972976


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -152,10 +202,68 @@ public ArchiveEventType getType() {
         }
     }
 
+    private void initJobCache() {
+        initArchivedJobCache();
+        initUnzippedJobCache();
+    }
+
+    private void initArchivedJobCache() {
+        if (this.webArchivedDir.list() == null) {
+            LOG.info("No legacy archived jobs");
+            return;
+        }
+        Set<String> jobInLocal =
+                Arrays.stream(this.webArchivedDir.list()).collect(Collectors.toSet());
+        LOG.info("Reload left archived jobs : [{}]", String.join(",", jobInLocal));
+
+        for (HistoryServer.RefreshLocation refreshLocation : refreshDirs) {
+            Path refreshDir = refreshLocation.getPath();
+            try {
+                FileStatus[] jobArchives = listArchives(refreshLocation.getFs(), refreshDir);
+                Set<String> jobInRefreshLocation =
+                        Arrays.stream(jobArchives)
+                                .map(FileStatus::getPath)
+                                .map(Path::getName)
+                                .collect(Collectors.toSet());
+                jobInRefreshLocation.retainAll(jobInLocal);
+                this.cachedArchivesPerRefreshDirectory.get(refreshDir).addAll(jobInRefreshLocation);
+            } catch (IOException e) {
+                LOG.error("Failed to reload archivedJobs in {}.", refreshDir, refreshDir, e);
+            }
+        }
+
+        for (String jobId : Objects.requireNonNull(this.webArchivedDir.list())) {
+            this.cachedArchivesPerRefreshDirectory.forEach((path, archives) -> archives.add(jobId));
+        }

Review Comment:
   > Why do we want to add all local archives to caches of all refresh directories?
   
   According to the design doc, the HistoryServer should reload the job files left behind by the previous HistoryServer instance when it starts.
   
   `cachedArchivesPerRefreshDirectory` maintains the downloaded job archives in {@link HistoryServerArchiveProcessor#webArchivedDir}, so the `HistoryServer` should 'add all local archives to the caches of all refresh directories'.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1100910581


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -75,16 +86,26 @@ class HistoryServerArchiveFetcher {
 
     /** Possible job archive operations in history-server. */
     public enum ArchiveEventType {
-        /** Job archive was found in one refresh location and created in history server. */
+        /** Archived job file was found in one refresh location and downloaded in history server. */
+        DOWNLOADED,
+        /** Unzipped Job archive was reloaded. */
+        RELOADED,
+        /**
+         * Archived job file was unzipped and Unzipped Job archive was created in history server.
+         */
         CREATED,
+        /** Unzipped Job archive was deleted in history server. */
+        CLEANED,
         /**
-         * Job archive was deleted from one of refresh locations and deleted from history server.
+         * Archived job file and Unzipped Job archive was deleted from one of refresh locations and
+         * deleted from history server.
          */
-        DELETED
+        DELETED,
     }
 
     /** Representation of job archive event. */
     public static class ArchiveEvent {
+

Review Comment:
   `ArchiveEvent` is still used for testing





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099957500


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -102,6 +106,9 @@ public class HistoryServer {
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServer.class);
     private static final ObjectMapper OBJECT_MAPPER = JacksonMapperFactory.createObjectMapper();
+    private static final String ARCHIVED_JOBS_DIR = "archivedJobs";
+    private static final String JOBS_DIR = "jobs";
+    private static final String OVERVIEWS_DIR = "overviews";

Review Comment:
   I have updated related comments.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099948579


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -75,16 +86,26 @@ class HistoryServerArchiveFetcher {
 
     /** Possible job archive operations in history-server. */
     public enum ArchiveEventType {
-        /** Job archive was found in one refresh location and created in history server. */
+        /** Archived job file was found in one refresh location and downloaded in history server. */
+        DOWNLOADED,
+        /** Unzipped Job archive was reloaded. */
+        RELOADED,

Review Comment:
   I have updated related comments.
   
   - DOWNLOADED: an archived job file was found in one refresh location and downloaded into {@link HistoryServerArchiveProcessor#webArchivedDir} on the history server.
   - RELOADED: the HistoryServer reloads the unzipped job files left behind by the previous HistoryServer instance when it starts.
   





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1012599965


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -166,10 +194,40 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
                             }
                         }
                     }
+                    // try to unzip archived files
+                    if (enableUnzip && !success) {
+                        // extract jobid from requestPath
+                        String jobId = extractJobId(requestPath);
+                        if (!StringUtils.isNullOrWhitespaceOnly(jobId)) {
+                            // submit unzip Task and get future
+                            Boolean unzipped =
+                                    CompletableFuture.supplyAsync(
+                                                    unzipTask.apply(jobId), unzipExecutor)
+                                            .get(UNZIP_TIMEOUT, TimeUnit.SECONDS);
+                            if (unzipped && file.exists()) {
+                                success = true;
+                            }

Review Comment:
   Done





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1013573354


##########
flink-core/src/main/java/org/apache/flink/configuration/HistoryServerOptions.java:
##########
@@ -143,5 +143,16 @@ public class HistoryServerOptions {
                                             code("IllegalConfigurationException"))
                                     .build());
 
+    public static final ConfigOption<Integer> HISTORY_SERVER_CACHED_JOBS =
+            key("historyserver.archive.cached-jobs")
+                    .intType()
+                    .defaultValue(500)
+                    .withDescription(

Review Comment:
   Refactored.





[GitHub] [flink] xintongsong commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
xintongsong commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1033366920


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -340,15 +364,24 @@ void stop() {
                     LOG.warn("Error while shutting down WebFrontendBootstrap.", t);
                 }
 
-                ExecutorUtils.gracefulShutdown(1, TimeUnit.SECONDS, executor);
-
+                ExecutorUtils.gracefulShutdown(1, TimeUnit.MINUTES, fetcherExecutor, unzipExecutor);
                 try {
-                    LOG.info("Removing web dashboard root cache directory {}", webDir);
-                    FileUtils.deleteDirectory(webDir);
+                    LOG.info("Removing web dashboard cached WebFrontend files in dir {}", webDir);
+                    for (java.nio.file.Path path : FileUtils.listDirectory(webDir.toPath())) {
+                        if ((Files.isDirectory(path)
+                                        && path.toFile().getName().equals(ARCHIVED_JOBS_DIR))
+                                || (Files.isDirectory(path)
+                                        && path.toFile().getName().equals(JOBS_DIR))
+                                || (Files.isDirectory(path)
+                                        && path.toFile().getName().equals(OVERVIEWS_DIR))) {
+                            continue;
+                        }

Review Comment:
   Why do we want to skip cleaning the cache files on termination?



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -116,9 +123,17 @@ public class HistoryServer {
     private WebFrontendBootstrap netty;
 
     private final long refreshIntervalMillis;
-    private final ScheduledExecutorService executor =
+    private final ScheduledExecutorService fetcherExecutor =
             Executors.newSingleThreadScheduledExecutor(
                     new ExecutorThreadFactory("Flink-HistoryServer-ArchiveFetcher"));
+    private final ExecutorService unzipExecutor =
+            new ThreadPoolExecutor(
+                    8,
+                    32,

Review Comment:
   I wonder if it makes sense to make these configurable.



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -75,16 +86,26 @@ class HistoryServerArchiveFetcher {
 
     /** Possible job archive operations in history-server. */
     public enum ArchiveEventType {
-        /** Job archive was found in one refresh location and created in history server. */
+        /** Archived job file was found in one refresh location and downloaded in history server. */
+        DOWNLOADED,
+        /** Unzipped Job archive was reloaded. */
+        RELOADED,

Review Comment:
   What are the differences between downloaded and reloaded?



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -102,6 +106,9 @@ public class HistoryServer {
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServer.class);
     private static final ObjectMapper OBJECT_MAPPER = JacksonMapperFactory.createObjectMapper();
+    private static final String ARCHIVED_JOBS_DIR = "archivedJobs";
+    private static final String JOBS_DIR = "jobs";
+    private static final String OVERVIEWS_DIR = "overviews";

Review Comment:
   It was unclear to me what these directories are for until I dove deeper into the code. The readability can be improved by documenting them explicitly.



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -152,10 +202,68 @@ public ArchiveEventType getType() {
         }
     }
 
+    private void initJobCache() {
+        initArchivedJobCache();
+        initUnzippedJobCache();
+    }
+
+    private void initArchivedJobCache() {
+        if (this.webArchivedDir.list() == null) {
+            LOG.info("No legacy archived jobs");
+            return;
+        }
+        Set<String> jobInLocal =
+                Arrays.stream(this.webArchivedDir.list()).collect(Collectors.toSet());
+        LOG.info("Reload left archived jobs : [{}]", String.join(",", jobInLocal));
+
+        for (HistoryServer.RefreshLocation refreshLocation : refreshDirs) {
+            Path refreshDir = refreshLocation.getPath();
+            try {
+                FileStatus[] jobArchives = listArchives(refreshLocation.getFs(), refreshDir);
+                Set<String> jobInRefreshLocation =
+                        Arrays.stream(jobArchives)
+                                .map(FileStatus::getPath)
+                                .map(Path::getName)
+                                .collect(Collectors.toSet());
+                jobInRefreshLocation.retainAll(jobInLocal);
+                this.cachedArchivesPerRefreshDirectory.get(refreshDir).addAll(jobInRefreshLocation);
+            } catch (IOException e) {
+                LOG.error("Failed to reload archivedJobs in {}.", refreshDir, refreshDir, e);
+            }
+        }
+
+        for (String jobId : Objects.requireNonNull(this.webArchivedDir.list())) {
+            this.cachedArchivesPerRefreshDirectory.forEach((path, archives) -> archives.add(jobId));
+        }

Review Comment:
   Why do we want to add all local archives to the caches of all refresh directories?



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -75,16 +86,26 @@ class HistoryServerArchiveFetcher {
 
     /** Possible job archive operations in history-server. */
     public enum ArchiveEventType {
-        /** Job archive was found in one refresh location and created in history server. */
+        /** Archived job file was found in one refresh location and downloaded in history server. */
+        DOWNLOADED,
+        /** Unzipped Job archive was reloaded. */
+        RELOADED,
+        /**
+         * Archived job file was unzipped and Unzipped Job archive was created in history server.
+         */
         CREATED,
+        /** Unzipped Job archive was deleted in history server. */
+        CLEANED,
         /**
-         * Job archive was deleted from one of refresh locations and deleted from history server.
+         * Archived job file and Unzipped Job archive was deleted from one of refresh locations and
+         * deleted from history server.
          */
-        DELETED
+        DELETED,
     }
 
     /** Representation of job archive event. */
     public static class ArchiveEvent {
+

Review Comment:
   It seems this event is no longer being used. We can probably just get rid of it.



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -103,47 +124,76 @@ public ArchiveEventType getType() {
     }
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServerArchiveFetcher.class);
-
     private static final JsonFactory jacksonFactory = new JsonFactory();
     private static final ObjectMapper mapper = JacksonMapperFactory.createObjectMapper();
 
     private static final String JSON_FILE_ENDING = ".json";
 
     private final List<HistoryServer.RefreshLocation> refreshDirs;
+    private final List<ArchiveEvent> events;
     private final Consumer<ArchiveEvent> jobArchiveEventListener;
     private final boolean processExpiredArchiveDeletion;
     private final boolean processBeyondLimitArchiveDeletion;
     private final int maxHistorySize;
 
     /** Cache of all available jobs identified by their id. */
     private final Map<Path, Set<String>> cachedArchivesPerRefreshDirectory;
+    /** Cache of all unzipped jobs. key: jobID */
+    private final LoadingCache<String, Boolean> unzippedJobCache;
 
     private final File webDir;
     private final File webJobDir;
+    private final File webArchivedDir;
     private final File webOverviewDir;
 
     HistoryServerArchiveFetcher(
             List<HistoryServer.RefreshLocation> refreshDirs,
             File webDir,
             Consumer<ArchiveEvent> jobArchiveEventListener,
             boolean cleanupExpiredArchives,
-            int maxHistorySize)
+            int maxHistorySize,
+            int maxCachedJobSize)
             throws IOException {
         this.refreshDirs = checkNotNull(refreshDirs);
+        this.events = Collections.synchronizedList(new ArrayList<>());
         this.jobArchiveEventListener = jobArchiveEventListener;
         this.processExpiredArchiveDeletion = cleanupExpiredArchives;
         this.maxHistorySize = maxHistorySize;
         this.processBeyondLimitArchiveDeletion = this.maxHistorySize > 0;
         this.cachedArchivesPerRefreshDirectory = new HashMap<>();
+        this.unzippedJobCache =
+                CacheBuilder.newBuilder()
+                        .concurrencyLevel(10)
+                        .initialCapacity(10)
+                        .maximumSize(maxCachedJobSize)
+                        .expireAfterAccess(7L, TimeUnit.DAYS)
+                        .removalListener(
+                                notification -> {
+                                    LOG.info(
+                                            "Job:{} is removed from cache with reason [{}]",
+                                            notification.getKey(),
+                                            notification.getCause());
+                                    deleteJobFiles((String) notification.getKey());
+                                })
+                        .build(
+                                new CacheLoader<String, Boolean>() {
+                                    @Override
+                                    public Boolean load(String s) throws IOException {
+                                        return unzipArchive(s);
+                                    }
+                                });
         for (HistoryServer.RefreshLocation refreshDir : refreshDirs) {
             cachedArchivesPerRefreshDirectory.put(refreshDir.getPath(), new HashSet<>());
         }
         this.webDir = checkNotNull(webDir);
+        this.webArchivedDir = new File(webDir, "archivedJobs");
+        Files.createDirectories(webArchivedDir.toPath());
         this.webJobDir = new File(webDir, "jobs");
         Files.createDirectories(webJobDir.toPath());
         this.webOverviewDir = new File(webDir, "overviews");
         Files.createDirectories(webOverviewDir.toPath());

Review Comment:
   These should refer to the same constants as in `HistoryServer`, rather than string literals.



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -103,47 +124,76 @@ public ArchiveEventType getType() {
     }
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServerArchiveFetcher.class);
-
     private static final JsonFactory jacksonFactory = new JsonFactory();
     private static final ObjectMapper mapper = JacksonMapperFactory.createObjectMapper();
 
     private static final String JSON_FILE_ENDING = ".json";
 
     private final List<HistoryServer.RefreshLocation> refreshDirs;
+    private final List<ArchiveEvent> events;
     private final Consumer<ArchiveEvent> jobArchiveEventListener;
     private final boolean processExpiredArchiveDeletion;
     private final boolean processBeyondLimitArchiveDeletion;
     private final int maxHistorySize;
 
     /** Cache of all available jobs identified by their id. */
     private final Map<Path, Set<String>> cachedArchivesPerRefreshDirectory;
+    /** Cache of all unzipped jobs. key: jobID */
+    private final LoadingCache<String, Boolean> unzippedJobCache;

Review Comment:
   What does the boolean value stand for?



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -166,10 +195,38 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
                             }
                         }
                     }
+                    // try to unzip archived files
+                    if (enableUnzip && !success) {
+                        // extract jobid from requestPath
+                        String jobId = extractJobId(requestPath);
+                        if (!StringUtils.isNullOrWhitespaceOnly(jobId)) {
+                            // submit unzip Task and get future
+                            Boolean unzipped =
+                                    CompletableFuture.supplyAsync(
+                                                    unzipTask.apply(jobId), unzipExecutor)

Review Comment:
   It's weird that some unzipping happens on the unzipExecutor, while other unzipping (in the fetcher) does not.





[GitHub] [flink] 1996fanrui commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
1996fanrui commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1008667842


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -266,6 +324,18 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
         }
     }
 
+    static String extractJobId(String requestPath) {
+        if (StringUtils.isNullOrWhitespaceOnly(requestPath)
+                || !requestPath.matches("^/jobs/.{32}\\.json$")) {
+            return null;
+        }
+        String secondPath = requestPath.split("/")[2];
+        if (StringUtils.isNullOrWhitespaceOnly(secondPath) || secondPath.length() < 32) {
+            return null;
+        }
+        return secondPath.substring(0, 32);

Review Comment:
   There are too many occurrences of 32; it should be defined as a constant.
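   
   A minimal sketch of the suggested refactoring, within `HistoryServerStaticFileServerHandler`; the constant name `JOB_ID_LENGTH` is hypothetical, not taken from the PR:
   
   ```java
   // Hypothetical constant name; placement and naming are illustrative only.
   private static final int JOB_ID_LENGTH = 32;
   
   static String extractJobId(String requestPath) {
       if (StringUtils.isNullOrWhitespaceOnly(requestPath)
               || !requestPath.matches("^/jobs/.{" + JOB_ID_LENGTH + "}\\.json$")) {
           return null;
       }
       String secondPath = requestPath.split("/")[2];
       if (StringUtils.isNullOrWhitespaceOnly(secondPath)
               || secondPath.length() < JOB_ID_LENGTH) {
           return null;
       }
       return secondPath.substring(0, JOB_ID_LENGTH);
   }
   ```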



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -166,10 +194,40 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
                             }
                         }
                     }
+                    // try to unzip archived files
+                    if (enableUnzip && !success) {
+                        // extract jobid from requestPath
+                        String jobId = extractJobId(requestPath);
+                        if (!StringUtils.isNullOrWhitespaceOnly(jobId)) {
+                            // submit unzip Task and get future
+                            Boolean unzipped =
+                                    CompletableFuture.supplyAsync(
+                                                    unzipTask.apply(jobId), unzipExecutor)
+                                            .get(UNZIP_TIMEOUT, TimeUnit.SECONDS);
+                            if (unzipped && file.exists()) {
+                                success = true;
+                            }

Review Comment:
   The code can be simplified to: `success = unzipped && file.exists()`



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -93,13 +102,31 @@
     private static final Logger LOG =
             LoggerFactory.getLogger(HistoryServerStaticFileServerHandler.class);
 
+    private static final long UNZIP_TIMEOUT = 10L;

Review Comment:
   How about `UNZIP_TIMEOUT_SECOND`?





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1013573243


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -266,6 +324,18 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
         }
     }
 
+    static String extractJobId(String requestPath) {
+        if (StringUtils.isNullOrWhitespaceOnly(requestPath)
+                || !requestPath.matches("^/jobs/.{32}\\.json$")) {
+            return null;
+        }
+        String secondPath = requestPath.split("/")[2];
+        if (StringUtils.isNullOrWhitespaceOnly(secondPath) || secondPath.length() < 32) {
+            return null;
+        }
+        return secondPath.substring(0, 32);

Review Comment:
   Done



##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -93,13 +102,31 @@
     private static final Logger LOG =
             LoggerFactory.getLogger(HistoryServerStaticFileServerHandler.class);
 
+    private static final long UNZIP_TIMEOUT = 10L;

Review Comment:
   Renamed





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1100958849


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandler.java:
##########
@@ -166,10 +195,38 @@ private void respondWithFile(ChannelHandlerContext ctx, HttpRequest request, Str
                             }
                         }
                     }
+                    // try to unzip archived files
+                    if (enableUnzip && !success) {
+                        // extract jobid from requestPath
+                        String jobId = extractJobId(requestPath);
+                        if (!StringUtils.isNullOrWhitespaceOnly(jobId)) {
+                            // submit unzip Task and get future
+                            Boolean unzipped =
+                                    CompletableFuture.supplyAsync(
+                                                    unzipTask.apply(jobId), unzipExecutor)

Review Comment:
   All unzipping happens in the `unzipExecutor`.
   
   `HistoryServerArchiveProcessor` only provides the unzip logic; the unzip tasks themselves are submitted to the `unzipExecutor`.
    
    





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099945870


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -340,15 +364,24 @@ void stop() {
                     LOG.warn("Error while shutting down WebFrontendBootstrap.", t);
                 }
 
-                ExecutorUtils.gracefulShutdown(1, TimeUnit.SECONDS, executor);
-
+                ExecutorUtils.gracefulShutdown(1, TimeUnit.MINUTES, fetcherExecutor, unzipExecutor);
                 try {
-                    LOG.info("Removing web dashboard root cache directory {}", webDir);
-                    FileUtils.deleteDirectory(webDir);
+                    LOG.info("Removing web dashboard cached WebFrontend files in dir {}", webDir);
+                    for (java.nio.file.Path path : FileUtils.listDirectory(webDir.toPath())) {
+                        if ((Files.isDirectory(path)
+                                        && path.toFile().getName().equals(ARCHIVED_JOBS_DIR))
+                                || (Files.isDirectory(path)
+                                        && path.toFile().getName().equals(JOBS_DIR))
+                                || (Files.isDirectory(path)
+                                        && path.toFile().getName().equals(OVERVIEWS_DIR))) {
+                            continue;
+                        }

Review Comment:
   In our case, the `HistoryServer` is deployed on a fixed machine.
   
   It takes extra time to download all job archives again when restarting the `HistoryServer` on the same node.
   
   So I think it's better for the `HistoryServer` to skip cleaning the cache files on termination.
   
   When it starts, the `HistoryServer` will reload the job archives and unzipped job files left behind by the previous instance in `org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveProcessor#initJobCache`.





[GitHub] [flink] reswqa commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
reswqa commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1008996802


##########
flink-core/src/main/java/org/apache/flink/configuration/HistoryServerOptions.java:
##########
@@ -143,5 +143,16 @@ public class HistoryServerOptions {
                                             code("IllegalConfigurationException"))
                                     .build());
 
+    public static final ConfigOption<Integer> HISTORY_SERVER_CACHED_JOBS =
+            key("historyserver.archive.cached-jobs")
+                    .intType()
+                    .defaultValue(500)
+                    .withDescription(

Review Comment:
   Why not use `withDescription(String description)` directly?
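   
   For illustration, a sketch of the same option declared with the plain-string overload (assuming the same static import of `ConfigOptions.key` used in `HistoryServerOptions`; the description text here is made up, not from the PR):
   
   ```java
   public static final ConfigOption<Integer> HISTORY_SERVER_CACHED_JOBS =
           key("historyserver.archive.cached-jobs")
                   .intType()
                   .defaultValue(500)
                   .withDescription(
                           "Maximum number of unzipped job archives the history server keeps"
                                   + " locally before evicting older entries.");
   ```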



##########
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandlerTest.java:
##########
@@ -30,12 +31,33 @@
 
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
 
 import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests for the HistoryServerStaticFileServerHandler. */
 class HistoryServerStaticFileServerHandlerTest {
 
+    @Test
+    void testExtractJobId() {

Review Comment:
   We should migrate the tests involved in this PR to JUnit 5 and AssertJ.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099976467


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -103,47 +124,76 @@ public ArchiveEventType getType() {
     }
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServerArchiveFetcher.class);
-
     private static final JsonFactory jacksonFactory = new JsonFactory();
     private static final ObjectMapper mapper = JacksonMapperFactory.createObjectMapper();
 
     private static final String JSON_FILE_ENDING = ".json";
 
     private final List<HistoryServer.RefreshLocation> refreshDirs;
+    private final List<ArchiveEvent> events;
     private final Consumer<ArchiveEvent> jobArchiveEventListener;
     private final boolean processExpiredArchiveDeletion;
     private final boolean processBeyondLimitArchiveDeletion;
     private final int maxHistorySize;
 
     /** Cache of all available jobs identified by their id. */
     private final Map<Path, Set<String>> cachedArchivesPerRefreshDirectory;
+    /** Cache of all unzipped jobs. key: jobID */
+    private final LoadingCache<String, Boolean> unzippedJobCache;

Review Comment:
   The boolean value in `unzippedJobCache` indicates whether the unzip was successful.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099954380


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServer.java:
##########
@@ -116,9 +123,17 @@ public class HistoryServer {
     private WebFrontendBootstrap netty;
 
     private final long refreshIntervalMillis;
-    private final ScheduledExecutorService executor =
+    private final ScheduledExecutorService fetcherExecutor =
             Executors.newSingleThreadScheduledExecutor(
                     new ExecutorThreadFactory("Flink-HistoryServer-ArchiveFetcher"));
+    private final ExecutorService unzipExecutor =
+            new ThreadPoolExecutor(
+                    8,
+                    32,

Review Comment:
   I configured those thread counts according to the frequency of `HistoryServer` access.
   
   What do you suggest? Maybe I should make those thread settings configurable, as sketched below.
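   
   One hypothetical way to expose them, mirroring the existing `HistoryServerOptions` style (option keys and wiring are purely illustrative, not part of this PR; the defaults 8 and 32 match the currently hard-coded pool sizes):
   
   ```java
   // Hypothetical options -- keys and names are illustrative only.
   public static final ConfigOption<Integer> HISTORY_SERVER_UNZIP_CORE_THREADS =
           key("historyserver.unzip.num-core-threads").intType().defaultValue(8);
   
   public static final ConfigOption<Integer> HISTORY_SERVER_UNZIP_MAX_THREADS =
           key("historyserver.unzip.num-max-threads").intType().defaultValue(32);
   
   // The unzip executor could then be built from the configuration, e.g.:
   // new ThreadPoolExecutor(
   //         config.get(HISTORY_SERVER_UNZIP_CORE_THREADS),
   //         config.get(HISTORY_SERVER_UNZIP_MAX_THREADS),
   //         ...);
   ```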





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099965280


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -152,10 +202,68 @@ public ArchiveEventType getType() {
         }
     }
 
+    private void initJobCache() {
+        initArchivedJobCache();
+        initUnzippedJobCache();
+    }
+
+    private void initArchivedJobCache() {
+        if (this.webArchivedDir.list() == null) {
+            LOG.info("No legacy archived jobs");
+            return;
+        }
+        Set<String> jobInLocal =
+                Arrays.stream(this.webArchivedDir.list()).collect(Collectors.toSet());
+        LOG.info("Reload left archived jobs : [{}]", String.join(",", jobInLocal));
+
+        for (HistoryServer.RefreshLocation refreshLocation : refreshDirs) {
+            Path refreshDir = refreshLocation.getPath();
+            try {
+                FileStatus[] jobArchives = listArchives(refreshLocation.getFs(), refreshDir);
+                Set<String> jobInRefreshLocation =
+                        Arrays.stream(jobArchives)
+                                .map(FileStatus::getPath)
+                                .map(Path::getName)
+                                .collect(Collectors.toSet());
+                jobInRefreshLocation.retainAll(jobInLocal);
+                this.cachedArchivesPerRefreshDirectory.get(refreshDir).addAll(jobInRefreshLocation);
+            } catch (IOException e) {
+                LOG.error("Failed to reload archivedJobs in {}.", refreshDir, refreshDir, e);
+            }
+        }
+
+        for (String jobId : Objects.requireNonNull(this.webArchivedDir.list())) {
+            this.cachedArchivesPerRefreshDirectory.forEach((path, archives) -> archives.add(jobId));
+        }

Review Comment:
   > Thanks for opening this PR, @dangshazi. I apologize for keeping you waiting so long before reviewing this.
   > 
   > I have left some comments. I think the biggest problem with the current PR is readability. Some key fields / terminologies are declared or used without explanation, making the code hard to understand. I'm not sure whether I have fully understood the logic, and thus cannot decide whether the changes are correct.
   
   I have updated the related comments and added the design doc [History Server support lazy unzip](https://docs.google.com/document/d/1o7YgXhHJxsObkduHLsr4YSwS8T-mo-tzLWwpsRceMNc/edit?usp=sharing) to this PR.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by "dangshazi (via GitHub)" <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1099978500


##########
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/history/HistoryServerArchiveFetcher.java:
##########
@@ -103,47 +124,76 @@ public ArchiveEventType getType() {
     }
 
     private static final Logger LOG = LoggerFactory.getLogger(HistoryServerArchiveFetcher.class);
-
     private static final JsonFactory jacksonFactory = new JsonFactory();
     private static final ObjectMapper mapper = JacksonMapperFactory.createObjectMapper();
 
     private static final String JSON_FILE_ENDING = ".json";
 
     private final List<HistoryServer.RefreshLocation> refreshDirs;
+    private final List<ArchiveEvent> events;
     private final Consumer<ArchiveEvent> jobArchiveEventListener;
     private final boolean processExpiredArchiveDeletion;
     private final boolean processBeyondLimitArchiveDeletion;
     private final int maxHistorySize;
 
     /** Cache of all available jobs identified by their id. */
     private final Map<Path, Set<String>> cachedArchivesPerRefreshDirectory;
+    /** Cache of all unzipped jobs. key: jobID */
+    private final LoadingCache<String, Boolean> unzippedJobCache;

Review Comment:
   > What does the boolean value stand for?
   
   The boolean value in `unzippedJobCache` indicates whether the unzip was successful.
   
   I just wanted to use the removalListener feature of the Guava cache.





[GitHub] [flink] flinkbot commented on pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
flinkbot commented on PR #21185:
URL: https://github.com/apache/flink/pull/21185#issuecomment-1294717524

   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "0db199c2e4388561a570aab07a9f4d93716f98eb",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "0db199c2e4388561a570aab07a9f4d93716f98eb",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 0db199c2e4388561a570aab07a9f4d93716f98eb UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>




[GitHub] [flink] reswqa commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
reswqa commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1013586123


##########
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandlerTest.java:
##########
@@ -30,12 +31,33 @@
 
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
 
 import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests for the HistoryServerStaticFileServerHandler. */
 class HistoryServerStaticFileServerHandlerTest {
 
+    @Test
+    void testExtractJobId() {

Review Comment:
   Sorry, I didn't see it clearly.





[GitHub] [flink] dangshazi commented on a diff in pull request #21185: [FLINK-28643][runtime-web] HistoryServer support lazy unzip

Posted by GitBox <gi...@apache.org>.
dangshazi commented on code in PR #21185:
URL: https://github.com/apache/flink/pull/21185#discussion_r1013573867


##########
flink-runtime-web/src/test/java/org/apache/flink/runtime/webmonitor/history/HistoryServerStaticFileServerHandlerTest.java:
##########
@@ -30,12 +31,33 @@
 
 import java.nio.file.Files;
 import java.nio.file.Path;
+import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
+import java.util.function.Supplier;
 
 import static org.assertj.core.api.Assertions.assertThat;
 
 /** Tests for the HistoryServerStaticFileServerHandler. */
 class HistoryServerStaticFileServerHandlerTest {
 
+    @Test
+    void testExtractJobId() {

Review Comment:
   I don't get it; I used `junit.jupiter.api`.




