Posted to hdfs-dev@hadoop.apache.org by "Varun Vasudev (JIRA)" <ji...@apache.org> on 2016/02/23 09:34:19 UTC

[jira] [Reopened] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel

     [ https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Vasudev reopened HDFS-8578:
---------------------------------

This is causing compilation to fail on branch-2.7.

{code}
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hadoop-hdfs: Compilation failure: Compilation failure:
[ERROR] /Users/vvasudev/Workspace/apache/committer-rw/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java:[57,30] cannot find symbol
[ERROR] symbol:   class DFSUtilClient
[ERROR] location: package org.apache.hadoop.hdfs
[ERROR] /Users/vvasudev/Workspace/apache/committer-rw/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java:[444,17] cannot find symbol
[ERROR] symbol:   variable DFSUtilClient
[ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataStorage
[ERROR] /Users/vvasudev/Workspace/apache/committer-rw/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java:[493,17] cannot find symbol
[ERROR] symbol:   variable DFSUtilClient
[ERROR] location: class org.apache.hadoop.hdfs.server.datanode.DataStorage
[ERROR] -> [Help 1]
{code}
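
For reference, the failure should be reproducible with a plain module compile on branch-2.7 (the command below is my reconstruction, not copied from the report):

{code}
mvn compile -pl hadoop-hdfs-project/hadoop-hdfs
{code}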

It looks like DFSUtilClient.java is missing from branch-2.7:

{code}
HWxxxxx:hadoop vvasudev$ git branch
  YARN-2139
  YARN-3926
  branch-2
* branch-2.7
  branch-2.8
  trunk
HWxxxxx:hadoop vvasudev$ git pull
Already up-to-date.
HWxxxxx:hadoop vvasudev$  find . -name DFSUtilClient.java
HWxxxxx:hadoop vvasudev$
{code}
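
One way to confirm on which branches the file was ever added (a suggested check, not from the original report):

{code}
# list commits that added DFSUtilClient.java on any branch
git log --all --oneline --diff-filter=A -- '*DFSUtilClient.java'
{code}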

> On upgrade, Datanode should process all storage/data dirs in parallel
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8578
>                 URL: https://issues.apache.org/jira/browse/HDFS-8578
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Raju Bairishetti
>            Assignee: Vinayakumar B
>            Priority: Critical
>             Fix For: 2.7.3
>
>         Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch, HDFS-8578-15.patch, HDFS-8578-16.patch, HDFS-8578-17.patch, HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch, HDFS-8578-branch-2.7-002.patch, HDFS-8578-branch-2.7-003.patch, h8578_20151210.patch, h8578_20151211.patch, h8578_20151211b.patch, h8578_20151212.patch, h8578_20151213.patch, h8578_20160117.patch, h8578_20160128.patch, h8578_20160128b.patch, h8578_20160216.patch, h8578_20160218.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs sequentially. If it takes ~20 minutes to process a single storage dir, a datanode with ~10 disks will take over 3 hours (~200 minutes) to come up.
> *BlockPoolSliceStorage.java*
> {code}
>     // Storage dirs are upgraded strictly one after another, so total
>     // upgrade time grows linearly with the number of disks.
>     for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>       doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>       assert getCTime() == nsInfo.getCTime()
>           : "Data-node and name-node CTimes must be the same.";
>     }
> {code}
> It would save a lot of time during major upgrades if the datanode processed all storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel? A rough sketch of one possible approach follows.
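>
> A minimal sketch of the parallel variant, reusing the method names from the snippet above; the thread-pool wiring and error handling here are illustrative assumptions, not the committed patch:
> {code}
> // Requires java.util.concurrent imports; datanode, nsInfo and startOpt
> // must be final (or effectively final) to be captured by the Callable.
> ExecutorService pool = Executors.newFixedThreadPool(getNumStorageDirs());
> List<Future<Void>> futures = new ArrayList<Future<Void>>();
> for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   final StorageDirectory sd = getStorageDir(idx);
>   futures.add(pool.submit(new Callable<Void>() {
>     @Override
>     public Void call() throws Exception {
>       // Each storage dir is upgraded on its own thread.
>       doTransition(datanode, sd, nsInfo, startOpt);
>       return null;
>     }
>   }));
> }
> try {
>   for (Future<Void> f : futures) {
>     f.get(); // wait for every dir; surfaces the first failure
>   }
> } catch (ExecutionException e) {
>   throw new IOException("Storage dir transition failed", e.getCause());
> } catch (InterruptedException e) {
>   Thread.currentThread().interrupt();
>   throw new IOException("Interrupted while upgrading storage dirs", e);
> } finally {
>   pool.shutdown();
> }
> // The CTime check can run once after all dirs have finished.
> assert getCTime() == nsInfo.getCTime()
>     : "Data-node and name-node CTimes must be the same.";
> {code}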



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)