Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2020/04/30 07:03:43 UTC

[GitHub] [hadoop] umamaheswararao opened a new pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

umamaheswararao opened a new pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988


   https://issues.apache.org/jira/browse/HDFS-15305
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419067506



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ *
+ * Tests ViewFsOverloadScheme with the local file system, using a file URI
+ * with an authority: file://mountTableName/ i.e., the authority is used to
+ * load a mount table.
+ */
+public class TestViewFsOverloadSchemeLocalFileSystem {
+  private static final String FILE = "file";
+  private static final Log LOG =
+      LogFactory.getLog(TestViewFsOverloadSchemeLocalFileSystem.class);
+  private FileSystem fsTarget;
+  private Configuration conf;
+  private Path targetTestRoot;
+  private FileSystemTestHelper fileSystemTestHelper;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);
+    fsTarget.mkdirs(targetTestRoot);
+  }
+
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);
+    try {
+      fsDos.writeUTF(testString);
+    } finally {
+      fsDos.close();
+    }
+
+    FSDataInputStream lViewIs = lViewFs.open(testPath);
+    try {
+      Assert.assertEquals(testString, lViewIs.readUTF());
+    } finally {
+      lViewIs.close();
+    }
+  }
+
+  /**
+   * Tests create file and delete file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsCreateAndDelete() throws Exception {
+    LOG.info("Starting testLocalFsCreateAndDelete");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path testPath = new Path(mountURI.toString() + "/lfsroot/test");
+      lViewFS.create(testPath);

Review comment:
       Let me use the other method, which closes the stream itself. I don't need the stream here.
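   For illustration, a try-with-resources sketch that avoids holding the stream (a sketch only; the self-closing method the author refers to may instead be a helper such as FileSystemTestHelper#createFile):

      // Sketch only: the stream is closed automatically, even if a later
      // call throws. No data is written because the test only needs the
      // file to exist.
      try (FSDataOutputStream out = lViewFS.create(testPath)) {
        // intentionally empty
      }
      Assert.assertTrue(lViewFS.exists(testPath));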






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419176899



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
       Other schemes that don't match will be resolved by FileSystem itself, right?
   Only when the target URI's scheme is the same as the OverloadScheme URI's scheme do we need to bypass, to avoid looping. In other cases, FileSystem#get or FileSystem#newInstance will get the right instance. Am I missing something?
   Since the exposed URI scheme is occupied by the OverloadScheme impl, we need that additional config to get the actual impl. That was the idea behind that configuration.
   
   >Currently, the override only works for one scheme.  
   
   The user is going to use one scheme from the outside per OverloadScheme instance, right?
   OverloadScheme init takes one URI to initialize, and that will be the user-initialized URI. All the rest go in as mapping URIs.
   
   Example: the initialized URI is hdfs://xyz:9000, and the mapping target URIs are
   /target1 -> hdfs://target1:9000, /target2 -> hdfs://target2:9000, /target3 -> s3a://bucket1/target3
   
   Here, when getting a target fs, we check whether its scheme matches the initialized one. hdfs was initialized here, so for the hdfs scheme, FileSystem#get resolution will always go to the OverloadScheme impl. To get the actual impl we use FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY to initialize the target fs. Other impls should be resolvable by FileSystem#get.
   Is your use case configuring the overload scheme for all fs.<scheme>.impl in the same client process? For example, in the same client process you would configure fs.hdfs.impl=ViewFSOverloadScheme and fs.s3a.impl=ViewFSOverloadScheme?
   Before changing it, it would be good for me to understand your use case here. -Thanks
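   For illustration, a minimal sketch of the configuration being described, using the hdfs://Cluster mount links from the class javadoc quoted later in this thread (the DistributedFileSystem class name is an assumption for illustration, not taken from this patch):

      Configuration conf = new Configuration();
      // The exposed scheme (hdfs) resolves to the overload scheme class.
      conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
      // Real impl for the overloaded scheme, so target-fs creation for
      // hdfs:// targets does not loop back into ViewFsOverloadScheme.
      conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
          "org.apache.hadoop.hdfs.DistributedFileSystem");
      conf.set("fs.viewfs.mounttable.Cluster./user", "hdfs://NN1/user");
      conf.set("fs.viewfs.mounttable.Cluster./backup", "s3a://bucket1/backup/");
      // s3a targets resolve through plain FileSystem#get; no extra key needed.
      FileSystem fs = FileSystem.get(URI.create("hdfs://Cluster/"), conf);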
   
   
   






[GitHub] [hadoop] virajith commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
virajith commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418375437



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
##########
@@ -42,4 +42,11 @@
    */
   public static final URI VIEWFS_URI = URI.create("viewfs:///");
   public static final String VIEWFS_SCHEME = "viewfs";
+
+  public static final String VIEWFS_OVERLOAD_SCHEME_KEY =
+      "fs.viewfs.overload.scheme";
+  public static final String VIEWFS_OVERLOAD_SCHEME_DEFAULT = "hdfs";
+  public static final String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY =
+      "fs.viewfs.overload.scheme.target.%s.impl";
+  public static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";

Review comment:
       Why add this here? This is just used in tests, right?

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple

Review comment:
       nits:
   "object is" -> "objective here is to handle"
   "a multiple" -> multiple.
   

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -302,6 +320,11 @@ protected FileSystem getTargetFileSystem(final String settings,
     }
   }
 
+  protected void superFSInit(final URI theUri, final Configuration conf)

Review comment:
       Javadoc for this method? This seems a bit hacky, but I understand the need. I think initializeSuperFs is a slightly better name, but I don't have a strong opinion here.
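   A possible javadoc, sketched under the assumption that the method simply forwards to FileSystem#initialize so that subclasses can skip ViewFileSystem's own mount-table setup (the wording and body are hypothetical, not the author's):

      /**
       * Initializes only the base FileSystem state by delegating to
       * FileSystem#initialize, bypassing ViewFileSystem's initialize().
       * Intended for subclasses such as ViewFsOverloadScheme that need to
       * run their own initialization first.
       */
      protected void superFSInit(final URI theUri, final Configuration conf)
          throws IOException {
        super.initialize(theUri, conf);
      }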

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);

Review comment:
       Is there a reason why the scheme of this FS needs to be set using VIEWFS_OVERLOAD_SCHEME_KEY? I would have assumed that we use "view://" similar to ViewFileSystem. Will we have valid use cases where fs.getScheme() needs to match "hdfs"?

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************

Review comment:
       The explanation here is a little confusing. I think it's easier to provide an example on how it works rather than try to write about it.

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
       Currently, the override only works for one scheme. This alone can prevent the circular dependency. However, should we consider calling createFileSystem for all schemes that have FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY set? 
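   A sketch of that suggestion (assumed shape, not the patch's code): bypass fs.<scheme>.impl resolution for every scheme that has an explicit target impl configured, not only the overloaded one.

      @Override
      public FileSystem createFs(URI uri, Configuration conf)
          throws IOException {
        String targetImplKey = String.format(
            FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY,
            uri.getScheme());
        if (conf.get(targetImplKey) != null) {
          // A target impl is configured for this scheme: instantiate it
          // directly to avoid looping back into ViewFsOverloadScheme.
          return createFileSystem(uri, conf);
        }
        // Otherwise, normal fs.<scheme>.impl resolution is safe.
        return FileSystem.newInstance(uri, conf);
      }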

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);

Review comment:
       What if this is not set? No check for this currently.
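   One possible guard, sketched as an assumption (not the patch's code): fail fast, or fall back to the initialized URI's own scheme, when the key is unset.

      myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
      if (myScheme == null) {
        // Alternatively: myScheme = theUri.getScheme();
        throw new IOException(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY
            + " is not configured; cannot determine the overloaded scheme.");
      }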

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 

Review comment:
       nit: "into loop" -> "in an infinite loop as the target"

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system

Review comment:
       nit: "with mounted target fs scheme" -> "the scheme of the target fs"
   "file system should be created" -> "the target file system .."

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)

Review comment:
       If we add a protected method `getFsCreator(String scheme)` in `ViewFileSystem` and override it in this class, I think `initialize` can be made much simpler. It just needs to `conf.setBoolean(CONFIG_VIEWFS_ENABLE_INNER_CACHE, true)` and call `super.initialize(theUri, conf)`. Is that correct?
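   A sketch of the suggested refactor (assumed shape, not the final patch): the subclass only overrides the factory hook, and initialize() shrinks to a few lines.

      @Override
      public void initialize(final URI theUri, final Configuration conf)
          throws IOException {
        myScheme = conf.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
        // Per the suggestion above: enable the inner cache before
        // delegating to ViewFileSystem's initialize().
        conf.setBoolean(CONFIG_VIEWFS_ENABLE_INNER_CACHE, true);
        super.initialize(theUri, conf);
      }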






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623688704


   The build report above seems to be due to build machine resource issues:
   mvn install
   ......
   [INFO] ------------------------------------------------------------------------
   [ERROR] unable to create new native thread -> [Help 1]
   [ERROR] 




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419058454



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme file
+ * system. The objective here is to handle multiple mounted file systems
+ * transparently. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFileSystem. Unlike ViewFileSystem
+ * scheme (viewfs://), the users would be able to use any scheme.
+ *
+ * Example 1:
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *      s3a://bucket1/backup/data.zip
+ *
+ * Example 2:
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *      s3a://bucketA/salesDB/dbfile
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {

Review comment:
       We discussed this. Sanjay suggested keeping ViewFsOverloadScheme to indicate that most of the implementation is ViewFS itself, but we get to overload the scheme. ViewOverloadSchemeFilesystem and ViewFsOverloadScheme were the names discussed while writing the doc. :-)
   
   Yes, we have a FileContext-based implementation under consideration. I will be filing a JIRA for that.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418743530



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ *
+ * Tests ViewFsOverloadScheme with the local file system, using a file URI
+ * with an authority: file://mountTableName/ i.e., the authority is used to
+ * load a mount table.
+ */
+public class TestViewFsOverloadSchemeLocalFileSystem {
+  private static final String FILE = "file";
+  private static final Log LOG =
+      LogFactory.getLog(TestViewFsOverloadSchemeLocalFileSystem.class);
+  private FileSystem fsTarget;
+  private Configuration conf;
+  private Path targetTestRoot;
+  private FileSystemTestHelper fileSystemTestHelper;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);
+    fsTarget.mkdirs(targetTestRoot);
+  }
+
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);
+    try {
+      fsDos.writeUTF(testString);
+    } finally {
+      fsDos.close();
+    }
+
+    FSDataInputStream lViewIs = lViewFs.open(testPath);
+    try {
+      Assert.assertEquals(testString, lViewIs.readUTF());
+    } finally {
+      lViewIs.close();
+    }
+  }
+
+  /**
+   * Tests create file and delete file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsCreateAndDelete() throws Exception {
+    LOG.info("Starting testLocalFsCreateAndDelete");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path testPath = new Path(mountURI.toString() + "/lfsroot/test");
+      lViewFS.create(testPath);
+      Assert.assertTrue(lViewFS.exists(testPath));
+      lViewFS.delete(testPath, true);
+      Assert.assertFalse(lViewFS.exists(testPath));
+    } finally {
+      lViewFS.close();
+    }
+  }
+
+  /**
+   * Tests root level file with linkMergeSlash with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsLinkSlashMerge() throws Exception {
+    LOG.info("Starting testLocalFSCreateAndDelete");

Review comment:
       Thanks, I will change them in the next patch.






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623090800


   @virajith Thank you for your time on the reviews. This commit should address most of your comments. Additionally, I fixed an issue where getTargetFileSystem should use the passed URI instead of the initialized URI. I have corrected that.




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419174943



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************

Review comment:
       #b) Added config details. We will cover further details in the user guide.
   #a) On the first point: yes, that is ViewFS functionality. On the second part: we have overridden getScheme, and that allows VFSOS to work with any scheme. Any-scheme flexibility should come from this class only.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418743053



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,170 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme file
+ * system. The objective here is to handle multiple mounted file systems
+ * transparently. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFileSystem. Unlike ViewFileSystem
+ * scheme (viewfs://), the users would be able to use any scheme.
+ *
+ * Example 1:
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *      s3a://bucket1/backup/data.zip
+ *
+ * Example 2:
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *      s3a://bucketA/salesDB/dbfile
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {

Review comment:
    Sure, I will add logging for this.
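    Something like this, perhaps (just a sketch of where SLF4J logging could hook in; imports would be org.slf4j.Logger/LoggerFactory, and the message text is illustrative):

        // Sketch: class-level logger in ViewFsOverloadScheme, using SLF4J
        // as elsewhere in Hadoop.
        private static final Logger LOG =
            LoggerFactory.getLogger(ViewFsOverloadScheme.class);

        // Example call site inside initialize():
        LOG.debug("Initialized ViewFsOverloadScheme for scheme: {}", myScheme);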






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418742960



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -96,16 +96,49 @@ static AccessControlException readOnlyMountTable(final String operation,
     return readOnlyMountTable(operation, p.toString());
   }
 
+  /**
+   * File system instance getter.
+   */
+  static class FsGetter {
+
+    /**
+     * Gets new file system instance of given uri.
+     */
+    public FileSystem getNewInstance(URI uri, Configuration conf)
+        throws IOException {
+      return FileSystem.newInstance(uri, conf);
+    }
+
+    /**
+     * Gets file system instance of given uri.
+     */
+    public FileSystem get(URI uri, Configuration conf) throws IOException {
+      return FileSystem.get(uri, conf);
+    }
+  }
+
+  /**
+   * Gets file system creator instance.
+   */
+  protected FsGetter fsGetter() {

Review comment:
    Thanks @chliang71 for the review. We have overridden this method in the ViewFsOverloadScheme class. There, fsGetter() returns a special anonymous FsGetter subclass with its own implementations of these methods. Does this answer your question?
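    Roughly, the wiring looks like this (a sketch only, not the exact PR code; `createFileSystem` is a hypothetical helper standing in for the scheme-aware creation logic discussed later in this thread):

        // In ViewFsOverloadScheme (sketch): ViewFileSystem asks fsGetter()
        // for the object that creates mount-target file systems, so
        // overriding it swaps in scheme-aware instance creation.
        @Override
        protected FsGetter fsGetter() {
          return new FsGetter() {
            @Override
            public FileSystem getNewInstance(URI uri, Configuration conf)
                throws IOException {
              return createFileSystem(uri, conf); // hypothetical helper
            }

            @Override
            public FileSystem get(URI uri, Configuration conf)
                throws IOException {
              return createFileSystem(uri, conf); // hypothetical helper
            }
          };
        }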






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419176899



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
    For other schemes that don't match, FileSystem itself will be able to resolve them, right?
    Only when a target URI's scheme is the same as the OverloadScheme URI's scheme do we need to bypass normal resolution, to avoid looping. In the other cases, FileSystem#get or FileSystem#newInstance will return the right instance. Am I missing something?
    Since the user-facing URI scheme is occupied by the OverloadScheme impl, we need that additional config to get at the actual impl. That was the idea behind that configuration.

    > Currently, the override only works for one scheme.

    The user is going to use one scheme from the outside per OverloadScheme instance, right? OverloadScheme init takes one URI to initialize, and that is the user-initialized URI. Everything else goes in as mapping URIs.

    Example: the inited URI is hdfs://xyz:9000, and the mapping target URIs are /target1 -> hdfs://target1:9000, /target2 -> hdfs://target2:9000, /target3 -> s3a://bucket1/target3.

    Here, when getting a target fs, we check whether its scheme matches the inited one. hdfs was inited, so FileSystem#get resolution for the hdfs scheme would always lead back to the OverloadScheme impl; to get the actual impl we use FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY to initialize the target fs. The other impls can be resolved by FileSystem#get.
    Is your use case configuring the overload scheme for every fs.<scheme>.impl in the same client process? For example, in the same client process you would configure fs.hdfs.impl=ViewFSOverloadScheme and fs.s3a.impl=ViewFSOverloadScheme?
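    In code, the bypass is roughly this (a sketch only; the actual PR builds the instance via java.lang.reflect.Constructor, whereas ReflectionUtils is used here for brevity):

        // Sketch of the creator used by ViewFsOverloadScheme: bypass the
        // fs.<scheme>.impl lookup when the target scheme equals the
        // overloaded scheme, otherwise fall back to normal resolution.
        public FileSystem createFs(URI uri, Configuration conf)
            throws IOException {
          if (uri.getScheme().equals(myScheme)) {
            // Resolve via fs.viewfs.overload.scheme.target.<scheme>.impl
            // so we don't loop back into ViewFsOverloadScheme.
            String confKey = String.format(
                FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
                uri.getScheme());
            Class<?> clazz = conf.getClass(confKey, null);
            if (clazz == null) {
              throw new UnsupportedFileSystemException(
                  confKey + " is not configured");
            }
            FileSystem fs =
                (FileSystem) ReflectionUtils.newInstance(clazz, conf);
            fs.initialize(uri, conf);
            return fs;
          }
          return FileSystem.newInstance(uri, conf);
        }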
   
   
   






[GitHub] [hadoop] virajith commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
virajith commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419042354



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);
+    fsTarget.mkdirs(targetTestRoot);
+  }
+
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);

Review comment:
    `try (FSDataOutputStream fsDos = lViewFs.create(testPath)) {
      fsDos.writeUTF(testString);
    }`

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);
+    try {
+      fsDos.writeUTF(testString);
+    } finally {
+      fsDos.close();
+    }
+
+    FSDataInputStream lViewIs = lViewFs.open(testPath);
+    try {

Review comment:
    `try (FSDataInputStream lViewIs = lViewFs.open(testPath)) {
      Assert.assertEquals(testString, lViewIs.readUTF());
    }`

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/******************************************************************************

Review comment:
       Thanks for adding these Uma. A couple of comments:
   (a) "The objective here is to handle multiple mounted file systems transparently.", "Unlike ViewFileSystem
    scheme (viewfs://), the users would be able to use any scheme."   --> These are not functions of this class. `ViewFileSystem` already does this.
   
   (b) Configuring `fs.SCHEME.impl = ViewFsOverloadScheme` is not explained in these examples. That should also be brought up.
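    e.g. the examples could spell out something like (a sketch mirroring the configuration keys used in the tests below):

        // Route the hdfs scheme through the overload implementation, and
        // point the target-fs key at the real DistributedFileSystem so
        // mount targets with the same scheme don't loop back.
        conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
        conf.set(String.format(
            FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
            "hdfs"), DistributedFileSystem.class.getName());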

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);
+    try {
+      fsDos.writeUTF(testString);
+    } finally {
+      fsDos.close();
+    }
+
+    FSDataInputStream lViewIs = lViewFs.open(testPath);
+    try {
+      Assert.assertEquals(testString, lViewIs.readUTF());
+    } finally {
+      lViewIs.close();
+    }
+  }
+
+  /**
+   * Tests create file and delete file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsCreateAndDelete() throws Exception {
+    LOG.info("Starting testLocalFsCreateAndDelete");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path testPath = new Path(mountURI.toString() + "/lfsroot/test");
+      lViewFS.create(testPath);

Review comment:
    The stream is not closed. Maybe use `createNewFile`?
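    i.e. something like:

        // createNewFile creates and closes the file in one call (no leaked
        // stream) and returns whether creation succeeded.
        Assert.assertTrue(lViewFS.createNewFile(testPath));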

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",

Review comment:
    The indentation of the `conf.set` lines is confusing. Can you make this better?
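    For instance (same calls, just reflowed):

        conf.set(String.format("fs.%s.impl", FILE),
            ViewFsOverloadScheme.class.getName());
        conf.set(String.format(
            FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
            FILE), LocalFileSystem.class.getName());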

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  /**
+   * Tests root level file with linkMergeSlash with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsLinkSlashMerge() throws Exception {
+    LOG.info("Starting testLocalFsLinkSlashMerge");
+    ConfigUtil.addLinkMergeSlash(conf, "mt",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path fileOnRoot = new Path(mountURI.toString() + "/NewFile");
+      lViewFS.create(fileOnRoot);

Review comment:
       same comment here -- use `createNewFile`?

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+
+  /**
+   * Tests create file and delete file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsCreateAndDelete() throws Exception {
+    LOG.info("Starting testLocalFsCreateAndDelete");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path testPath = new Path(mountURI.toString() + "/lfsroot/test");
+      lViewFS.create(testPath);
+      Assert.assertTrue(lViewFS.exists(testPath));
+      lViewFS.delete(testPath, true);
+      Assert.assertFalse(lViewFS.exists(testPath));
+    } finally {
+      lViewFS.close();
+    }
+  }
+
+  /**
+   * Tests root level file with linkMergeSlash with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsLinkSlashMerge() throws Exception {
+    LOG.info("Starting testLocalFsLinkSlashMerge");
+    ConfigUtil.addLinkMergeSlash(conf, "mt",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path fileOnRoot = new Path(mountURI.toString() + "/NewFile");
+      lViewFS.create(fileOnRoot);
+      Assert.assertTrue(lViewFS.exists(fileOnRoot));
+    } finally {
+      lViewFS.close();
+    }
+  }
+
+  /**
+   * Tests with linkMergeSlash and other mounts in ViewFSOverloadScheme.
+   */
+  @Test(expected = IOException.class)
+  public void testLocalFsLinkSlashMergeWithOtherMountLinks() throws Exception {
+    LOG.info("Starting testLocalFsLinkSlashMergeWithOtherMountLinks");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    ConfigUtil.addLinkMergeSlash(conf, "mt",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    FileSystem.get(mountURI, conf);
+    Assert.fail("A merge slash cannot be configured with other mount links.");
+  }
+
+  @After
+  public void tearDown() throws Exception {
+    fsTarget.delete(fileSystemTestHelper.getTestRootPath(fsTarget), true);

Review comment:
       `fsTarget.close()` as well?

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,187 @@
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {

Review comment:
       I think `ViewOverloadSchemeFilesystem` is a better name. Also do we need an implementation extending `ViewFs` class (which extends `AbstractFileSystem`)?

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);

Review comment:
    Except for these two, can the rest be done in `@BeforeClass`?
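    Roughly (a sketch of the split; the fields would need to become static, and note each test also mutates `conf` via `ConfigUtil.addLink`, so links would accumulate across tests unless reset):

        @BeforeClass
        public static void setUpClass() throws Exception {
          conf = new Configuration();
          conf.set(String.format("fs.%s.impl", FILE),
              ViewFsOverloadScheme.class.getName());
          conf.set(String.format(
              FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
              FILE), LocalFileSystem.class.getName());
          fsTarget = new LocalFileSystem();
          fsTarget.initialize(new URI("file:///"), conf);
          fileSystemTestHelper = new FileSystemTestHelper();
          targetTestRoot =
              fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
        }

        // Only the per-test cleanup of the shared root stays in @Before.
        @Before
        public void setUp() throws Exception {
          fsTarget.delete(targetTestRoot, true);
          fsTarget.mkdirs(targetTestRoot);
        }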

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeHdfsFileSystemContract.java
##########
@@ -0,0 +1,122 @@
+package org.apache.hadoop.fs.viewfs;
+
+import static org.junit.Assume.assumeTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemContractBaseTest;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.TestHDFSFileSystemContract;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with file system contract tests.
+ */
+public class TestViewFsOverloadSchemeHdfsFileSystemContract
+    extends TestHDFSFileSystemContract {
+
+  private MiniDFSCluster cluster;
+  private String defaultWorkingDirectory;
+
+  @Before
+  public void setUp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
+        FileSystemContractBaseTest.TEST_UMASK);
+    final File basedir = GenericTestUtils.getRandomizedTestDir();
+    cluster = new MiniDFSCluster.Builder(conf, basedir)
+        .numDataNodes(2)
+        .build();
+    defaultWorkingDirectory =
+        "/user/" + UserGroupInformation.getCurrentUser().getShortUserName();
+    conf.set(String.format("fs.%s.impl", "hdfs"),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        "hdfs"),
+        DistributedFileSystem.class.getName());
+    URI defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), "/user",
+        defaultFSURI);
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), "/append",
+        defaultFSURI);
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(),
+        "/FileSystemContractBaseTest/",
+        new URI(defaultFSURI.toString() + "/FileSystemContractBaseTest/"));
+    fs = (ViewFsOverloadScheme) FileSystem.get(conf);

Review comment:
       cast not required.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeHdfsFileSystemContract.java
##########
@@ -0,0 +1,122 @@
+  @Before
+  public void setUp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
+        FileSystemContractBaseTest.TEST_UMASK);
+    final File basedir = GenericTestUtils.getRandomizedTestDir();
+    cluster = new MiniDFSCluster.Builder(conf, basedir)

Review comment:
    Do we need the `cluster` created for every test? Seems heavy :|

    Also, can we set `FS_DEFAULT_NAME_KEY` explicitly here?
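    e.g. something like (a sketch; `getURI()` returns the NameNode URI the builder assigned):

        // Pin fs.defaultFS explicitly rather than relying on the value the
        // MiniDFSCluster builder left on this conf.
        conf.set(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY,
            cluster.getURI().toString());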

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##########
@@ -0,0 +1,351 @@
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR =
+      PathUtils.getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  @Before
+  public void startCluster() throws IOException {
+    conf = new Configuration();
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
+        true);
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        HDFS_SCHEME), DistributedFileSystem.class.getName());
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    cluster.waitClusterUp();
+    defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    localTargetDir = new File(TEST_ROOT_DIR, "/root/");
+    Assert.assertEquals(HDFS_SCHEME, defaultFSURI.getScheme()); // hdfs scheme.
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    if (cluster != null) {
+      FileSystem.closeAll();
+      cluster.shutdown();
+    }
+  }
+
+  private void createLinks(boolean needFalbackLink, Path hdfsTargetPath,
+      Path localTragetPath) {
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), HDFS_USER_FOLDER,
+        hdfsTargetPath.toUri());
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), LOCAL_FOLDER,
+        localTragetPath.toUri());
+    if (needFalbackLink) {
+      ConfigUtil.addLinkFallback(conf, defaultFSURI.getAuthority(),
+          hdfsTargetPath.toUri());
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * create file /HDFSUser/testfile should create in hdfs
+   * create file /local/test should create directory in local fs
+   */
+  @Test(timeout = 30000)
+  public void testMountLinkWithLocalAndHDFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    Assert.assertEquals(2, fs.getMountPoints().length);
+
+    // /HDFSUser/testfile
+    Path hdfsFile = new Path(HDFS_USER_FOLDER + "/testfile");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+    fs.create(hdfsFile); // /HDFSUser/testfile
+    fs.mkdirs(localDir); // /local/test
+
+    // Initialize HDFS and test files exist in ls or not
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(defaultFSURI, conf);
+    try {
+      Assert.assertTrue(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should be in hdfs.
+      Assert.assertFalse(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should not be in local fs.
+    } finally {
+      dfs.close();
+    }
+
+    RawLocalFileSystem lfs = new RawLocalFileSystem();
+    lfs.initialize(localTragetPath.toUri(), conf);
+    try {
+      Assert.assertFalse(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should not be in hdfs.
+      Assert.assertTrue(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should be in local fs.
+    } finally {
+      lfs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> nonexistent://NonExistent/User/
+   * It should fail to add non existent fs link.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testMountLinkWithNonExistentLink() throws Exception {
+    final String userFolder = "/User";
+    final Path nonExistTargetPath =
+        new Path("nonexistent://NonExistent" + userFolder);
+
+    /**
+     * Below addLink will create following mount points
+     * hdfs://localhost:xxx/User --> nonexistent://NonExistent/User/
+     */
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), userFolder,
+        nonExistTargetPath.toUri());
+    FileSystem.get(conf);
+    Assert.fail("Expected to fail with non existent link");
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus on / should list the mount links.
+   */
+  @Test(timeout = 30000)
+  public void testListStatusOnRootShouldListAllMountLinks() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      FileStatus[] ls = fs.listStatus(new Path("/"));
+      Assert.assertEquals(2, ls.length);
+      String lsPath1 =
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).toString();
+      String lsPath2 =
+          Path.getPathWithoutSchemeAndAuthority(ls[1].getPath()).toString();
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath1) || LOCAL_FOLDER.equals(lsPath1));
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath2) || LOCAL_FOLDER.equals(lsPath2));
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus non mount directory should fail.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testListStatusOnNonMountedPath() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.listStatus(new Path("/nonMount"));
+      Assert.fail("It should fail as no mount link with /nonMount");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows hdfs://localhost:xxx/HDFSUser -->
+   * hdfs://localhost:xxx/HDFSUser/ hdfs://localhost:xxx/local -->
+   * file://TEST_ROOT_DIR/root/ fallback --> hdfs://localhost:xxx/HDFSUser/
+   * Creating file or directory at non root level should succeed with fallback
+   * links.
+   */
+  @Test(timeout = 30000)
+  public void testWithLinkFallBack() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/nonMount/myfile"));
+      FileStatus[] ls = fs.listStatus(new Path("/nonMount"));
+      Assert.assertEquals(1, ls.length);
+      Assert.assertEquals(
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).getName(),
+          "myfile");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It can not find any mount link. ViewFS expects a mount point from root.
+   */
+  @Test(expected = NotInMountpointException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailWhenMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/newFileOnRoot"));
+      Assert.fail("It should fail as root is read only in viewFS.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = AccessControlException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailEvenFallBackMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail(
+          "It should fail as root is read only in viewFS, even when configured"
+              + " with fallback.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = UnsupportedFileSystemException.class, timeout = 30000)
+  public void testInvalidOverloadSchemeTargetFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+    conf = new Configuration();
+    createLinks(true, hdfsTargetPath, localTragetPath);

Review comment:
       Calling createLinks isn't even needed here, right?

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##########
@@ -0,0 +1,351 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR =
+      PathUtils.getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  @Before
+  public void startCluster() throws IOException {
+    conf = new Configuration();
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
+        true);
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        HDFS_SCHEME), DistributedFileSystem.class.getName());
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    cluster.waitClusterUp();
+    defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    localTargetDir = new File(TEST_ROOT_DIR, "/root/");
+    Assert.assertEquals(HDFS_SCHEME, defaultFSURI.getScheme()); // hdfs scheme.
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    if (cluster != null) {
+      FileSystem.closeAll();
+      cluster.shutdown();
+    }
+  }
+
+  private void createLinks(boolean needFalbackLink, Path hdfsTargetPath,
+      Path localTragetPath) {
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), HDFS_USER_FOLDER,
+        hdfsTargetPath.toUri());
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), LOCAL_FOLDER,
+        localTragetPath.toUri());
+    if (needFalbackLink) {
+      ConfigUtil.addLinkFallback(conf, defaultFSURI.getAuthority(),
+          hdfsTargetPath.toUri());
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * create file /HDFSUser/testfile should create in hdfs
+   * create file /local/test should create directory in local fs
+   */
+  @Test(timeout = 30000)
+  public void testMountLinkWithLocalAndHDFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    Assert.assertEquals(2, fs.getMountPoints().length);
+
+    // /HDFSUser/testfile
+    Path hdfsFile = new Path(HDFS_USER_FOLDER + "/testfile");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+    fs.create(hdfsFile); // /HDFSUser/testfile
+    fs.mkdirs(localDir); // /local/test
+
+    // Initialize HDFS and test files exist in ls or not
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(defaultFSURI, conf);
+    try {
+      Assert.assertTrue(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should be in hdfs.
+      Assert.assertFalse(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should not be in local fs.
+    } finally {
+      dfs.close();
+    }
+
+    RawLocalFileSystem lfs = new RawLocalFileSystem();
+    lfs.initialize(localTragetPath.toUri(), conf);
+    try {
+      Assert.assertFalse(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should not be in hdfs.
+      Assert.assertTrue(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should be in local fs.
+    } finally {
+      lfs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> nonexistent://NonExistent/User/
+   * It should fail to add non existent fs link.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testMountLinkWithNonExistentLink() throws Exception {
+    final String userFolder = "/User";
+    final Path nonExistTargetPath =
+        new Path("nonexistent://NonExistent" + userFolder);
+
+    /**
+     * Below addLink will create following mount points
+     * hdfs://localhost:xxx/User --> nonexistent://NonExistent/User/
+     */
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), userFolder,
+        nonExistTargetPath.toUri());
+    FileSystem.get(conf);
+    Assert.fail("Expected to fail with non existent link");
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus on / should list the mount links.
+   */
+  @Test(timeout = 30000)
+  public void testListStatusOnRootShouldListAllMountLinks() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      FileStatus[] ls = fs.listStatus(new Path("/"));
+      Assert.assertEquals(2, ls.length);
+      String lsPath1 =
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).toString();
+      String lsPath2 =
+          Path.getPathWithoutSchemeAndAuthority(ls[1].getPath()).toString();
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath1) || LOCAL_FOLDER.equals(lsPath1));
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath2) || LOCAL_FOLDER.equals(lsPath2));
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus non mount directory should fail.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testListStatusOnNonMountedPath() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.listStatus(new Path("/nonMount"));
+      Assert.fail("It should fail as no mount link with /nonMount");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows hdfs://localhost:xxx/HDFSUser -->
+   * hdfs://localhost:xxx/HDFSUser/ hdfs://localhost:xxx/local -->
+   * file://TEST_ROOT_DIR/root/ fallback --> hdfs://localhost:xxx/HDFSUser/
+   * Creating file or directory at non root level should succeed with fallback
+   * links.
+   */
+  @Test(timeout = 30000)
+  public void testWithLinkFallBack() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/nonMount/myfile"));
+      FileStatus[] ls = fs.listStatus(new Path("/nonMount"));
+      Assert.assertEquals(1, ls.length);
+      Assert.assertEquals(
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).getName(),
+          "myfile");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It can not find any mount link. ViewFS expects a mount point from root.
+   */
+  @Test(expected = NotInMountpointException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailWhenMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/newFileOnRoot"));
+      Assert.fail("It should fail as root is read only in viewFS.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = AccessControlException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailEvenFallBackMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail(
+          "It should fail as root is read only in viewFS, even when configured"
+              + " with fallback.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = UnsupportedFileSystemException.class, timeout = 30000)
+  public void testInvalidOverloadSchemeTargetFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+    conf = new Configuration();
+    createLinks(true, hdfsTargetPath, localTragetPath);
+    conf.set(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY,
+        defaultFSURI.toString());
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail("OverloadScheme target fs should be valid.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It should be able to create file using ViewFsOverloadScheme.
+   */
+  @Test(timeout = 30000)
+  public void testViewFsOverloadSchemeWhenInnerCacheDisabled()

Review comment:
       Can we add a test that the cache actually works?
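
   For illustration, a minimal sketch of what such a cache test could look like, reusing this test class's createLinks helper and fields (the test name is illustrative, and it assumes the default FileSystem cache is left enabled):

       @Test(timeout = 30000)
       public void testViewFsOverloadSchemeWithInnerCacheEnabled()
           throws Exception {
         final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
         final Path localTargetPath = new Path(localTargetDir.toURI());
         createLinks(false, hdfsTargetPath, localTargetPath);

         // With the FileSystem cache enabled, two get() calls with the same
         // URI, configuration and user should return the same cached instance.
         FileSystem fs1 = FileSystem.get(conf);
         FileSystem fs2 = FileSystem.get(conf);
         Assert.assertSame(fs1, fs2);
       }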






[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623676065


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   0m 25s |  root in trunk failed.  |
   | -1 :x: |  compile  |   0m 17s |  root in trunk failed.  |
   | -0 :warning: |  checkstyle  |   0m 20s |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 28s |  hadoop-common in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-hdfs in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   1m 30s |  branch has no errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 16s |  hadoop-common in trunk failed.  |
   | +0 :ok: |  spotbugs  |   4m 16s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | -1 :x: |  findbugs  |   0m 43s |  hadoop-common in trunk failed.  |
   | -0 :warning: |  patch  |   4m 27s |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 41s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 34s |  the patch passed  |
   | -1 :x: |  compile  |   2m 40s |  root in the patch failed.  |
   | -1 :x: |  javac  |   2m 40s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   3m 25s |  root: The patch generated 130 new + 0 unchanged - 0 fixed = 130 total (was 0)  |
   | -1 :x: |  mvnsite  |   0m 29s |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 28s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | -1 :x: |  shadedclient  |   0m 31s |  patch has errors when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 15s |  hadoop-common in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 23s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 29s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 52s |  hadoop-hdfs in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 16s |  hadoop-common in the patch failed.  |
   | -1 :x: |  unit  |   0m 25s |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 31s |  The patch generated 1 ASF License warnings.  |
   |  |   |  26m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 36717d18b60a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ebb878bab99 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-mvninstall-root.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-compile-root.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/buildtool-branch-checkstyle-root.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-javadoc-hadoop-common-project_hadoop-common.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-compile-root.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-compile-root.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/diff-checkstyle-root.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt |
   | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt |
   | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt |
   | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/testReport/ |
   | asflicense | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/artifact/out/patch-asflicense-problems.txt |
   | Max. process+thread count | 93 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/6/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623785215


   Thanks a lot, @virajith and @chliang71, for the reviews.




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419067735



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ *
+ * Test the TestViewFsOverloadSchemeLocalFS using a file with authority:
+ * file://mountTableName/ i.e, the authority is used to load a mount table.
+ */
+public class TestViewFsOverloadSchemeLocalFileSystem {
+  private static final String FILE = "file";
+  private static final Log LOG =
+      LogFactory.getLog(TestViewFsOverloadSchemeLocalFileSystem.class);
+  private FileSystem fsTarget;
+  private Configuration conf;
+  private Path targetTestRoot;
+  private FileSystemTestHelper fileSystemTestHelper;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);

Review comment:
       Some tests have the slashmerge config, which will not work if there are other mount links in the config. I think it's ok to keep the config inits in @Before (local fs is not heavy).
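
   For context, a slash-merge link mounts the target at "/" itself, so it cannot coexist with other mount links in the same mount table. A hedged sketch, assuming ConfigUtil's addLinkMergeSlash helper and an illustrative mount table name and target URI:

       // Mounts the whole target at "/" for mount table "mt"; adding any
       // other link (e.g. /user) to the same table would then conflict.
       ConfigUtil.addLinkMergeSlash(conf, "mt", URI.create("file:///tmp/root"));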






[GitHub] [hadoop] chliang71 commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
chliang71 commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419659179



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -96,16 +96,49 @@ static AccessControlException readOnlyMountTable(final String operation,
     return readOnlyMountTable(operation, p.toString());
   }
 
+  /**
+   * File system instance getter.
+   */
+  static class FsGetter {
+
+    /**
+     * Gets new file system instance of given uri.
+     */
+    public FileSystem getNewInstance(URI uri, Configuration conf)
+        throws IOException {
+      return FileSystem.newInstance(uri, conf);
+    }
+
+    /**
+     * Gets file system instance of given uri.
+     */
+    public FileSystem get(URI uri, Configuration conf) throws IOException {
+      return FileSystem.get(uri, conf);
+    }
+  }
+
+  /**
+   * Gets file system creator instance.
+   */
+  protected FsGetter fsGetter() {

Review comment:
       I see, thanks!
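
   For context, keeping fsGetter() protected lets a subclass swap in its own target-fs creation strategy, which is what the overload-scheme implementation relies on. A minimal sketch with an illustrative subclass name, assuming the usual org.apache.hadoop.fs imports and the same package as ViewFileSystem:

       // Hypothetical subclass whose targets always bypass the FileSystem
       // cache.
       class UncachedViewFileSystem extends ViewFileSystem {
         @Override
         protected FsGetter fsGetter() {
           return new FsGetter() {
             @Override
             public FileSystem get(URI uri, Configuration conf)
                 throws IOException {
               // Create a fresh instance instead of consulting the cache.
               return FileSystem.newInstance(uri, conf);
             }
           };
         }
       }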






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622103167


   Just a note: I have not verified nfly mount links in this patch. I will file another JIRA for the nfly changes and tests.
   Nfly uses FileSystem.get, so I should make nfly also use FsCreator to handle the looping.




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418461809



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
       Yes, I addressed this. Thanks.
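
   To make the loop avoidance concrete, this is roughly how the two keys fit together in configuration (a fragment mirroring the test setup elsewhere in this patch; the hdfs scheme is just an example):

       Configuration conf = new Configuration();
       // hdfs:// URIs now resolve to ViewFsOverloadScheme...
       conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
       // ...so the real target fs for the hdfs scheme is looked up through
       // the overload-scheme target key instead of fs.hdfs.impl, breaking
       // the loop.
       conf.set(String.format(
           FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
           "hdfs"), DistributedFileSystem.class.getName());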






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622298055


   Thank you @virajith for reviewing the PR. I have pushed another commit addressing all of your comments. I refactored the initialization part a bit, which fixes most of the comments.
   Fixed the checkstyle and whitespace issues as well.
   Please take a look.




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419058454



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,187 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme file
+ * system. The objective here is to handle multiple mounted file systems
+ * transparently. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFileSystem. Unlike ViewFileSystem
+ * scheme (viewfs://), the users would be able to use any scheme.
+ *
+ * Example 1:
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *      s3a://bucket1/backup/data.zip
+ *
+ * Example 2:
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *      s3a://bucketA/salesDB/dbfile
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {

Review comment:
       We discussed this. Sanjay suggested keeping ViewFsOverloadScheme to indicate that most of the implementation is ViewFS itself, but we get to overload the scheme. I can change this if there are no objections, anyway. ViewOverloadSchemeFilesystem and ViewFsOverloadScheme were the names discussed while writing the doc. :-)
   
   Yes, we have that FileContext-based implementation under consideration. I will be filing a JIRA for that.
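
   As a usage sketch of Example 1 from the class javadoc above (the NN1 and bucket URIs and the "Cluster" mount table name are illustrative, and the standard Hadoop imports are assumed):

       Configuration conf = new Configuration();
       // Route hdfs:// through the overload scheme, and point the real
       // target fs for the hdfs scheme at DistributedFileSystem.
       conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
       conf.set(String.format(
           FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
           "hdfs"), DistributedFileSystem.class.getName());
       ConfigUtil.addLink(conf, "Cluster", "/user",
           URI.create("hdfs://NN1/user"));
       ConfigUtil.addLink(conf, "Cluster", "/data",
           URI.create("o3fs://bucket1.volume1/data"));
       // Op1: creating hdfs://Cluster/user/fileA goes to hdfs://NN1/user/fileA.
       FileSystem fs = FileSystem.get(URI.create("hdfs://Cluster/"), conf);
       fs.create(new Path("/user/fileA"));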






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419179752



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeHdfsFileSystemContract.java
##########
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import static org.junit.Assume.assumeTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemContractBaseTest;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.TestHDFSFileSystemContract;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with file system contract tests.
+ */
+public class TestViewFsOverloadSchemeHdfsFileSystemContract
+    extends TestHDFSFileSystemContract {
+
+  private MiniDFSCluster cluster;
+  private String defaultWorkingDirectory;
+
+  @Before
+  public void setUp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
+        FileSystemContractBaseTest.TEST_UMASK);
+    final File basedir = GenericTestUtils.getRandomizedTestDir();
+    cluster = new MiniDFSCluster.Builder(conf, basedir)

Review comment:
       I tried moving the cluster init into @Before. Contract tests are passing. Please check.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419179610



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##########
@@ -0,0 +1,351 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR =
+      PathUtils.getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  @Before
+  public void startCluster() throws IOException {
+    conf = new Configuration();
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
+        true);
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        HDFS_SCHEME), DistributedFileSystem.class.getName());
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    cluster.waitClusterUp();
+    defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    localTargetDir = new File(TEST_ROOT_DIR, "/root/");
+    Assert.assertEquals(HDFS_SCHEME, defaultFSURI.getScheme()); // hdfs scheme.
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    if (cluster != null) {
+      FileSystem.closeAll();
+      cluster.shutdown();
+    }
+  }
+
+  private void createLinks(boolean needFalbackLink, Path hdfsTargetPath,
+      Path localTragetPath) {
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), HDFS_USER_FOLDER,
+        hdfsTargetPath.toUri());
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), LOCAL_FOLDER,
+        localTragetPath.toUri());
+    if (needFalbackLink) {
+      ConfigUtil.addLinkFallback(conf, defaultFSURI.getAuthority(),
+          hdfsTargetPath.toUri());
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * create file /HDFSUser/testfile should create in hdfs
+   * create file /local/test should create directory in local fs
+   */
+  @Test(timeout = 30000)
+  public void testMountLinkWithLocalAndHDFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    Assert.assertEquals(2, fs.getMountPoints().length);
+
+    // /HDFSUser/testfile
+    Path hdfsFile = new Path(HDFS_USER_FOLDER + "/testfile");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+    fs.create(hdfsFile); // /HDFSUser/testfile
+    fs.mkdirs(localDir); // /local/test
+
+    // Initialize HDFS and test files exist in ls or not
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(defaultFSURI, conf);
+    try {
+      Assert.assertTrue(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should be in hdfs.
+      Assert.assertFalse(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should not be in local fs.
+    } finally {
+      dfs.close();
+    }
+
+    RawLocalFileSystem lfs = new RawLocalFileSystem();
+    lfs.initialize(localTragetPath.toUri(), conf);
+    try {
+      Assert.assertFalse(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should not be in hdfs.
+      Assert.assertTrue(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should be in local fs.
+    } finally {
+      lfs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> nonexistent://NonExistent/User/
+   * It should fail to add non existent fs link.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testMountLinkWithNonExistentLink() throws Exception {
+    final String userFolder = "/User";
+    final Path nonExistTargetPath =
+        new Path("nonexistent://NonExistent" + userFolder);
+
+    /**
+     * Below addLink will create following mount points
+     * hdfs://localhost:xxx/User --> nonexistent://NonExistent/User/
+     */
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), userFolder,
+        nonExistTargetPath.toUri());
+    FileSystem.get(conf);
+    Assert.fail("Expected to fail with non existent link");
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus on / should list the mount links.
+   */
+  @Test(timeout = 30000)
+  public void testListStatusOnRootShouldListAllMountLinks() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      FileStatus[] ls = fs.listStatus(new Path("/"));
+      Assert.assertEquals(2, ls.length);
+      String lsPath1 =
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).toString();
+      String lsPath2 =
+          Path.getPathWithoutSchemeAndAuthority(ls[1].getPath()).toString();
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath1) || LOCAL_FOLDER.equals(lsPath1));
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath2) || LOCAL_FOLDER.equals(lsPath2));
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus non mount directory should fail.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testListStatusOnNonMountedPath() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.listStatus(new Path("/nonMount"));
+      Assert.fail("It should fail as no mount link with /nonMount");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows:
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   * Creating a file or directory at a non-root level should succeed with
+   * fallback links.
+   */
+  @Test(timeout = 30000)
+  public void testWithLinkFallBack() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/nonMount/myfile"));
+      FileStatus[] ls = fs.listStatus(new Path("/nonMount"));
+      Assert.assertEquals(1, ls.length);
+      Assert.assertEquals(
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).getName(),
+          "myfile");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows:
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * A create on root (/) can not resolve to any mount link. ViewFS expects
+   * every path to be under a mount point from the root.
+   */
+  @Test(expected = NotInMountpointException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailWhenMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/newFileOnRoot"));
+      Assert.fail("It should fail as root is read only in viewFS.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows:
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find the fallback link, but the root is still read-only, so the
+   * create should fail.
+   */
+  @Test(expected = AccessControlException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailEvenFallBackMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail(
+          "It should fail as root is read only in viewFS, even when configured"
+              + " with fallback.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows:
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * Here the target fs impl (fs.viewfs.overload.scheme.target.hdfs.impl) is
+   * not configured for the overloaded scheme, so creating the target file
+   * system should fail with UnsupportedFileSystemException.
+   */
+  @Test(expected = UnsupportedFileSystemException.class, timeout = 30000)
+  public void testInvalidOverloadSchemeTargetFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+    conf = new Configuration();
+    createLinks(true, hdfsTargetPath, localTragetPath);
+    conf.set(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY,
+        defaultFSURI.toString());
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail("OverloadScheme target fs should be valid.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows:
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It should be able to create a file using ViewFsOverloadScheme.
+   */
+  @Test(timeout = 30000)
+  public void testViewFsOverloadSchemeWhenInnerCacheDisabled()

Review comment:
       Added Tests.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418462355



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)

Review comment:
       Thanks. Refactored; please check whether it addresses this comment.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419069751



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeHdfsFileSystemContract.java
##########
@@ -0,0 +1,122 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import static org.junit.Assume.assumeTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemContractBaseTest;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.AppendTestUtil;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.hdfs.TestHDFSFileSystemContract;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Ignore;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with file system contract tests.
+ */
+public class TestViewFsOverloadSchemeHdfsFileSystemContract
+    extends TestHDFSFileSystemContract {
+
+  private MiniDFSCluster cluster;
+  private String defaultWorkingDirectory;
+
+  @Before
+  public void setUp() throws Exception {
+    final Configuration conf = new HdfsConfiguration();
+    conf.set(CommonConfigurationKeys.FS_PERMISSIONS_UMASK_KEY,
+        FileSystemContractBaseTest.TEST_UMASK);
+    final File basedir = GenericTestUtils.getRandomizedTestDir();
+    cluster = new MiniDFSCluster.Builder(conf, basedir)

Review comment:
       Let me check whether I can start the cluster once and reuse it. We reused the existing HDFSContract test.
   If the existing contract tests pass with a one-time cluster start, I should be good to do that. I will try it.
   Why do we need to set FS_DEFAULT_NAME_KEY explicitly? The MiniDFSCluster will do that for us if we use the same config.
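
   A minimal sketch of the once-per-class cluster start being considered here, assuming the
   contract tests can share a single MiniDFSCluster (class and method names are illustrative):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;

    public class SharedClusterSketch {
      private static MiniDFSCluster cluster;

      @BeforeClass
      public static void startCluster() throws IOException {
        Configuration conf = new HdfsConfiguration();
        // Start the cluster once for the whole class instead of per test.
        cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
        cluster.waitClusterUp();
      }

      @AfterClass
      public static void stopCluster() {
        if (cluster != null) {
          cluster.shutdown();
        }
      }
    }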






[GitHub] [hadoop] virajith commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
virajith commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419045604



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
       I don't think this is addressed. I am saying `createFileSystem` should be called here for every scheme for which the appropriate FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY is set, not only those where `uri.getScheme().equals(myScheme)`. Wdyt?
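
   A rough sketch of that suggestion, assuming FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY
   expands to fs.viewfs.overload.scheme.target.<scheme>.impl; createFileSystem stands in for the
   reflective creation helper discussed in this review, and the wrapper class is illustrative:

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    class TargetFsResolutionSketch {
      static final String TARGET_IMPL_PATTERN =
          "fs.viewfs.overload.scheme.target.%s.impl";

      FileSystem createFs(URI uri, Configuration conf) throws IOException {
        String targetImplKey =
            String.format(TARGET_IMPL_PATTERN, uri.getScheme());
        if (conf.get(targetImplKey) != null) {
          // A target impl is configured for this scheme: instantiate it
          // directly, bypassing fs.<scheme>.impl resolution, which would
          // otherwise loop back into ViewFsOverloadScheme.
          return createFileSystem(uri, conf);
        }
        // No overload-scheme target configured: use normal resolution.
        return FileSystem.newInstance(uri, conf);
      }

      private FileSystem createFileSystem(URI uri, Configuration conf)
          throws IOException {
        // Placeholder for the reflection-based creation discussed above.
        throw new UnsupportedOperationException("sketch only");
      }
    }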






[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622656195


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  22m 15s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 48s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  18m 49s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  0s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 42s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 57s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  5s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  5s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 11s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 25s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 25s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  4s |  root: The patch generated 1 new + 130 unchanged - 1 fixed = 131 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   2m 58s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 25s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 13s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  94m 32s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  The patch does not generate ASF License warnings.  |
   |  |   | 241m 21s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
   |   | hadoop.hdfs.TestDeadNodeDetection |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.TestRefreshCallQueue |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8e953de40c64 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82343790eeb |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/artifact/out/diff-checkstyle-root.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/testReport/ |
   | Max. process+thread count | 4209 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-621756333


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  4s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 12s |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 18s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  4s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m  5s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 10s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  0s |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |  16m 18s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 41s |  root: The patch generated 13 new + 130 unchanged - 1 fixed = 143 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 10 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 12s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  |  93m 11s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate ASF License warnings.  |
   |  |   | 220m 21s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.TestRefreshCallQueue |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 35e0641a50f8 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5b45c53a4e |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/diff-checkstyle-root.txt |
   | whitespace | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/whitespace-eol.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/testReport/ |
   | Max. process+thread count | 5007 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418461407



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);

Review comment:
       Same as above. Removed this now, and that should address your comment. Please let me know if that is not the case.






[GitHub] [hadoop] chliang71 commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
chliang71 commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418732158



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -96,16 +96,49 @@ static AccessControlException readOnlyMountTable(final String operation,
     return readOnlyMountTable(operation, p.toString());
   }
 
+  /**
+   * File system instance getter.
+   */
+  static class FsGetter {
+
+    /**
+     * Gets new file system instance of given uri.
+     */
+    public FileSystem getNewInstance(URI uri, Configuration conf)
+        throws IOException {
+      return FileSystem.newInstance(uri, conf);
+    }
+
+    /**
+     * Gets file system instance of given uri.
+     */
+    public FileSystem get(URI uri, Configuration conf) throws IOException {
+      return FileSystem.get(uri, conf);
+    }
+  }
+
+  /**
+   * Gets file system creator instance.
+   */
+  protected FsGetter fsGetter() {

Review comment:
       Why do we need this method when we can just call new FsGetter() directly?
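
   For context, a minimal sketch (hypothetical class names) of what the protected factory method
   buys: a subclass can swap in a different FsGetter without touching any call site in the parent.

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    class ParentSketch {
      static class FsGetter {
        public FileSystem get(URI uri, Configuration conf) throws IOException {
          return FileSystem.get(uri, conf);
        }
      }

      protected FsGetter fsGetter() {
        return new FsGetter(); // default: plain FileSystem resolution
      }

      void useIt(URI uri, Configuration conf) throws IOException {
        // Call sites go through the factory, so a subclass override is
        // picked up automatically.
        FileSystem fs = fsGetter().get(uri, conf);
        System.out.println("Loaded: " + fs.getClass().getName());
      }
    }

    class OverloadSchemeSketch extends ParentSketch {
      @Override
      protected FsGetter fsGetter() {
        return new FsGetter() {
          @Override
          public FileSystem get(URI uri, Configuration conf)
              throws IOException {
            // Scheme-aware creation would go here in the real subclass.
            return super.get(uri, conf);
          }
        };
      }
    }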

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,170 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme file
+ * system. The objective here is to handle multiple mounted file systems
+ * transparently. Mount link configurations and in-memory mount table
+ * building behaviors are inherited from ViewFileSystem. Unlike ViewFileSystem
+ * scheme (viewfs://), the users would be able to use any scheme.
+ *
+ * Example 1:
+ * If users want some of their existing cluster (hdfs://Cluster)
+ * data to mount with other hdfs and object store clusters(hdfs://NN1,
+ * o3fs://bucket1.volume1/, s3a://bucket1/)
+ *
+ * fs.viewfs.mounttable.Cluster./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.Cluster./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.Cluster./backup = s3a://bucket1/backup/
+ *
+ * Op1: Create file hdfs://Cluster/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file hdfs://Cluster/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file hdfs://Cluster/backup/data.zip will go to
+ *      s3a://bucket1/backup/data.zip
+ *
+ * Example 2:
+ * If users want some of their existing cluster (s3a://bucketA/)
+ * data to mount with other hdfs and object store clusters
+ * (hdfs://NN1, o3fs://bucket1.volume1/)
+ *
+ * fs.viewfs.mounttable.bucketA./user = hdfs://NN1/user
+ * fs.viewfs.mounttable.bucketA./data = o3fs://bucket1.volume1/data
+ * fs.viewfs.mounttable.bucketA./salesDB = s3a://bucketA/salesDB/
+ *
+ * Op1: Create file s3a://bucketA/user/fileA will go to hdfs://NN1/user/fileA
+ * Op2: Create file s3a://bucketA/data/datafile will go to
+ *      o3fs://bucket1.volume1/data/datafile
+ * Op3: Create file s3a://bucketA/salesDB/dbfile will go to
+ *      s3a://bucketA/salesDB/dbfile
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {

Review comment:
       A general comment: it would be good to have some logging in this class (DEBUG level).
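
   A small sketch of the kind of DEBUG-level logging being suggested (the message and the call
   site are illustrative, not from the patch):

    import java.net.URI;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class OverloadSchemeLoggingSketch {
      private static final Logger LOG =
          LoggerFactory.getLogger(OverloadSchemeLoggingSketch.class);

      void onInitialize(URI theUri, String mountTableName) {
        if (LOG.isDebugEnabled()) {
          LOG.debug("Initializing ViewFsOverloadScheme with uri: {}, mount"
              + " table: {}", theUri, mountTableName);
        }
      }
    }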

##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeLocalFileSystem.java
##########
@@ -0,0 +1,158 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FileSystemTestHelper;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.LocalFileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ *
+ * Test the TestViewFsOverloadSchemeLocalFS using a file with authority:
+ * file://mountTableName/ i.e, the authority is used to load a mount table.
+ */
+public class TestViewFsOverloadSchemeLocalFileSystem {
+  private static final String FILE = "file";
+  private static final Log LOG =
+      LogFactory.getLog(TestViewFsOverloadSchemeLocalFileSystem.class);
+  private FileSystem fsTarget;
+  private Configuration conf;
+  private Path targetTestRoot;
+  private FileSystemTestHelper fileSystemTestHelper;
+
+  @Before
+  public void setUp() throws Exception {
+    conf = new Configuration();
+    conf.set(String.format("fs.%s.impl",
+        FILE),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        FILE),
+        LocalFileSystem.class.getName());
+    fsTarget = new LocalFileSystem();
+    fsTarget.initialize(new URI("file:///"), conf);
+    fileSystemTestHelper = new FileSystemTestHelper();
+    // create the test root on local_fs
+    targetTestRoot = fileSystemTestHelper.getAbsoluteTestRootPath(fsTarget);
+    fsTarget.delete(targetTestRoot, true);
+    fsTarget.mkdirs(targetTestRoot);
+  }
+
+  /**
+   * Tests write file and read file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalTargetLinkWriteSimple() throws IOException {
+    LOG.info("Starting testLocalTargetLinkWriteSimple");
+    final String testString = "Hello Local!...";
+    final Path lfsRoot = new Path("/lfsRoot");
+    ConfigUtil.addLink(conf, lfsRoot.toString(),
+        URI.create(targetTestRoot + "/local"));
+    final FileSystem lViewFs = FileSystem.get(URI.create("file:///"), conf);
+
+    final Path testPath = new Path(lfsRoot, "test.txt");
+    final FSDataOutputStream fsDos = lViewFs.create(testPath);
+    try {
+      fsDos.writeUTF(testString);
+    } finally {
+      fsDos.close();
+    }
+
+    FSDataInputStream lViewIs = lViewFs.open(testPath);
+    try {
+      Assert.assertEquals(testString, lViewIs.readUTF());
+    } finally {
+      lViewIs.close();
+    }
+  }
+
+  /**
+   * Tests create file and delete file with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsCreateAndDelete() throws Exception {
+    LOG.info("Starting testLocalFsCreateAndDelete");
+    ConfigUtil.addLink(conf, "mt", "/lfsroot",
+        URI.create(targetTestRoot + "/wd2"));
+    final URI mountURI = URI.create("file://mt/");
+    final FileSystem lViewFS = FileSystem.get(mountURI, conf);
+    try {
+      Path testPath = new Path(mountURI.toString() + "/lfsroot/test");
+      lViewFS.create(testPath);
+      Assert.assertTrue(lViewFS.exists(testPath));
+      lViewFS.delete(testPath, true);
+      Assert.assertFalse(lViewFS.exists(testPath));
+    } finally {
+      lViewFS.close();
+    }
+  }
+
+  /**
+   * Tests root level file with linkMergeSlash with ViewFSOverloadScheme.
+   */
+  @Test
+  public void testLocalFsLinkSlashMerge() throws Exception {
+    LOG.info("Starting testLocalFSCreateAndDelete");

Review comment:
       Shouldn't this message be testLocalFsLinkSlashMerge? Similarly for the other test below.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418461989



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 

Review comment:
       done.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418461212



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);

Review comment:
       Good point. This config is not required now. Initialization is now allowed with any scheme URI.
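
   A minimal sketch (field and class names illustrative) of initializing from any scheme URI
   without the dedicated config key:

    import java.io.IOException;
    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;

    class SchemeFromUriSketch {
      private String myScheme;

      public void initialize(final URI theUri, final Configuration conf)
          throws IOException {
        // Take the scheme from the initialization URI itself; no
        // fs.viewfs.overload.scheme key is needed.
        this.myScheme = theUri.getScheme();
        // ... remaining initialization ...
      }
    }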






[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623784495


   I looked at it. On my local Mac, ".Trash" is reserved, so I changed ./Trash to Trash1 and ran it several times. It passed. It should be some flaky test. (It has some history of failing, e.g. HADOOP-8542.)
   Since it passed across several runs, I did not dig further.
   
   Since the recent build was having issues,
   I ran the build locally:
   [INFO] Apache Hadoop Cloud Storage ........................ SUCCESS [  0.865 s]
   [INFO] Apache Hadoop Cloud Storage Project ................ SUCCESS [  0.051 s]
   [INFO] ------------------------------------------------------------------------
   [INFO] BUILD SUCCESS
   [INFO] ------------------------------------------------------------------------
   [INFO] Total time:  22:14 min
   [INFO] Finished at: 2020-05-04T16:50:01-07:00
   [INFO] ------------------------------------------------------------------------
   
   
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   [INFO] Running org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeLocalFileSystem
   [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.48 s - in org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeLocalFileSystem
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
   
   [INFO] 
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   [INFO] Running org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract
   [INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.732 s - in org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeHdfsFileSystemContract
   [INFO] Running org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme
   [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.184 s - in org.apache.hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0
   
   [INFO] 
   [INFO] -------------------------------------------------------
   [INFO]  T E S T S
   [INFO] -------------------------------------------------------
   [INFO] Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.412 s - in org.apache.hadoop.fs.viewfs.TestViewFsTrash
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
   




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418460626



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##########
@@ -302,6 +320,11 @@ protected FileSystem getTargetFileSystem(final String settings,
     }
   }
 
+  protected void superFSInit(final URI theUri, final Configuration conf)

Review comment:
       Removed this method now. After refactoring the initialization part, this got addressed as part of that change.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418460857



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************

Review comment:
    I added examples; please let me know if they make sense.






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418460714



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple

Review comment:
    Thanks, corrected them.






[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622375625


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   2m 22s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  7s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  2s |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m  2s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 14s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 34s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 18s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 18s |  the patch passed  |
   | -1 :x: |  compile  |  22m 30s |  root in the patch failed.  |
   | -1 :x: |  javac  |  22m 30s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   3m 55s |  root: The patch generated 1 new + 130 unchanged - 1 fixed = 131 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  7s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   7m 10s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 14s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 118m 45s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  The patch does not generate ASF License warnings.  |
   |  |   | 276m 28s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.TestRefreshCallQueue |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 05ee4f0f234f 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 82343790eeb |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/patch-compile-root.txt |
   | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/patch-compile-root.txt |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/diff-checkstyle-root.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/testReport/ |
   | Max. process+thread count | 2870 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623221963


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 59s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 33s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   3m 16s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 11s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 44s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 43s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   6m 14s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  the patch passed  |
   | +1 :green_heart: |  javac  |  20m 43s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m  5s |  root: The patch generated 0 new + 130 unchanged - 1 fixed = 130 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   3m  7s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m  2s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   6m 50s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 29s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 132m  3s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  The patch does not generate ASF License warnings.  |
   |  |   | 283m 51s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.hdfs.server.namenode.ha.TestAddBlockTailing |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/5/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a80acf9e19bc 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 8dace8ff3a9 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/5/testReport/ |
   | Max. process+thread count | 3007 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/5/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419078620



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##########
@@ -0,0 +1,351 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR =
+      PathUtils.getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  @Before
+  public void startCluster() throws IOException {
+    conf = new Configuration();
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
+        true);
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        HDFS_SCHEME), DistributedFileSystem.class.getName());
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    cluster.waitClusterUp();
+    defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    localTargetDir = new File(TEST_ROOT_DIR, "/root/");
+    Assert.assertEquals(HDFS_SCHEME, defaultFSURI.getScheme()); // hdfs scheme.
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    if (cluster != null) {
+      FileSystem.closeAll();
+      cluster.shutdown();
+    }
+  }
+
+  private void createLinks(boolean needFalbackLink, Path hdfsTargetPath,
+      Path localTragetPath) {
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), HDFS_USER_FOLDER,
+        hdfsTargetPath.toUri());
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), LOCAL_FOLDER,
+        localTragetPath.toUri());
+    if (needFalbackLink) {
+      ConfigUtil.addLinkFallback(conf, defaultFSURI.getAuthority(),
+          hdfsTargetPath.toUri());
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * create file /HDFSUser/testfile should create in hdfs
+   * create file /local/test should create directory in local fs
+   */
+  @Test(timeout = 30000)
+  public void testMountLinkWithLocalAndHDFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    Assert.assertEquals(2, fs.getMountPoints().length);
+
+    // /HDFSUser/testfile
+    Path hdfsFile = new Path(HDFS_USER_FOLDER + "/testfile");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+    fs.create(hdfsFile); // /HDFSUser/testfile
+    fs.mkdirs(localDir); // /local/test
+
+    // Initialize HDFS and test files exist in ls or not
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(defaultFSURI, conf);
+    try {
+      Assert.assertTrue(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should be in hdfs.
+      Assert.assertFalse(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should not be in local fs.
+    } finally {
+      dfs.close();
+    }
+
+    RawLocalFileSystem lfs = new RawLocalFileSystem();
+    lfs.initialize(localTragetPath.toUri(), conf);
+    try {
+      Assert.assertFalse(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should not be in hdfs.
+      Assert.assertTrue(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should be in local fs.
+    } finally {
+      lfs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> nonexistent://NonExistent/User/
+   * It should fail to add non existent fs link.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testMountLinkWithNonExistentLink() throws Exception {
+    final String userFolder = "/User";
+    final Path nonExistTargetPath =
+        new Path("nonexistent://NonExistent" + userFolder);
+
+    /**
+     * Below addLink will create following mount points
+     * hdfs://localhost:xxx/User --> nonexistent://NonExistent/User/
+     */
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), userFolder,
+        nonExistTargetPath.toUri());
+    FileSystem.get(conf);
+    Assert.fail("Expected to fail with non existent link");
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus on / should list the mount links.
+   */
+  @Test(timeout = 30000)
+  public void testListStatusOnRootShouldListAllMountLinks() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      FileStatus[] ls = fs.listStatus(new Path("/"));
+      Assert.assertEquals(2, ls.length);
+      String lsPath1 =
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).toString();
+      String lsPath2 =
+          Path.getPathWithoutSchemeAndAuthority(ls[1].getPath()).toString();
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath1) || LOCAL_FOLDER.equals(lsPath1));
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath2) || LOCAL_FOLDER.equals(lsPath2));
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus non mount directory should fail.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testListStatusOnNonMountedPath() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.listStatus(new Path("/nonMount"));
+      Assert.fail("It should fail as no mount link with /nonMount");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows hdfs://localhost:xxx/HDFSUser -->
+   * hdfs://localhost:xxx/HDFSUser/ hdfs://localhost:xxx/local -->
+   * file://TEST_ROOT_DIR/root/ fallback --> hdfs://localhost:xxx/HDFSUser/
+   * Creating file or directory at non root level should succeed with fallback
+   * links.
+   */
+  @Test(timeout = 30000)
+  public void testWithLinkFallBack() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/nonMount/myfile"));
+      FileStatus[] ls = fs.listStatus(new Path("/nonMount"));
+      Assert.assertEquals(1, ls.length);
+      Assert.assertEquals(
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).getName(),
+          "myfile");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It can not find any mount link. ViewFS expects a mount point from root.
+   */
+  @Test(expected = NotInMountpointException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailWhenMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/newFileOnRoot"));
+      Assert.fail("It should fail as root is read only in viewFS.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = AccessControlException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailEvenFallBackMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail(
+          "It should fail as root is read only in viewFS, even when configured"
+              + " with fallback.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = UnsupportedFileSystemException.class, timeout = 30000)
+  public void testInvalidOverloadSchemeTargetFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+    conf = new Configuration();
+    createLinks(true, hdfsTargetPath, localTragetPath);

Review comment:
    Without creating links, ViewFS fails with "no mount points configured". So, to get to the intended scenario, we just need to create the links first.
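    For reference, a minimal sketch of that setup (the authority and paths are illustrative, mirroring this test's createLinks helper, not the committed code):

        // Register at least one mount link for the mount table before
        // initializing the file system, so ViewFS finds a mount point.
        Configuration conf = new Configuration();
        conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
        ConfigUtil.addLink(conf, "localhost:9000", "/HDFSUser",
            URI.create("hdfs://localhost:9000/HDFSUser"));
        FileSystem fs =
            FileSystem.get(URI.create("hdfs://localhost:9000/"), conf);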






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418461519



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system

Review comment:
    Corrected, thanks.
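    To make the javadoc above concrete, here is a hedged approximation of the loop-avoidance logic; the ReflectionUtils-based instantiation is an assumption for illustration, not the committed code:

        @Override
        public FileSystem createFs(URI uri, Configuration conf)
            throws IOException {
          if (uri.getScheme().equals(myScheme)) {
            // The overloaded scheme matches: fs.<scheme>.impl points back
            // to ViewFsOverloadScheme, so look up the real implementation
            // under fs.viewfs.overload.scheme.target.<scheme>.impl instead.
            String implKey = String.format(
                "fs.viewfs.overload.scheme.target.%s.impl", uri.getScheme());
            Class<? extends FileSystem> clazz =
                conf.getClass(implKey, null, FileSystem.class);
            if (clazz == null) {
              throw new UnsupportedFileSystemException(
                  "No target fs impl configured for scheme: "
                      + uri.getScheme());
            }
            FileSystem fs = ReflectionUtils.newInstance(clazz, conf);
            fs.initialize(uri, conf);
            return fs;
          }
          // Non-matching schemes resolve through the normal machinery.
          return FileSystem.newInstance(uri, conf);
        }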






[GitHub] [hadoop] hadoop-yetus commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-623128231


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |  26m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 5 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 52s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 54s |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 10s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 48s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 22s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   3m 12s |  Used deprecated FindBugs config; considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 19s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |  22m  5s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   3m  9s |  root: The patch generated 17 new + 130 unchanged - 1 fixed = 147 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 44s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 34s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   9m 30s |  hadoop-common in the patch passed.  |
   | -1 :x: |  unit  | 124m 32s |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  The patch does not generate ASF License warnings.  |
   |  |   | 288m 36s |   |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.fs.viewfs.TestViewFsTrash |
   |   | hadoop.io.compress.TestCompressorDecompressor |
   |   | hadoop.io.compress.snappy.TestSnappyCompressorDecompressor |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1988 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 697ebdd366f4 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 44de193bec3 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/artifact/out/diff-checkstyle-root.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   |  Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/testReport/ |
   | Max. process+thread count | 3123 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1988/4/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] umamaheswararao commented on pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#issuecomment-622593781


   @chliang71 Thank you for the reviews. I have pushed an update addressing your comments.
   Please take a look and see if it addresses your concerns.
   
   -Thank you!




[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419079123



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFsOverloadSchemeWithHdfsScheme.java
##########
@@ -0,0 +1,351 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.RawLocalFileSystem;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.DistributedFileSystem;
+import org.apache.hadoop.hdfs.MiniDFSCluster;
+import org.apache.hadoop.security.AccessControlException;
+import org.apache.hadoop.test.PathUtils;
+import org.junit.After;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * Tests ViewFsOverloadScheme with configured mount links.
+ */
+public class TestViewFsOverloadSchemeWithHdfsScheme {
+  private static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";
+  private static final String HDFS_SCHEME = "hdfs";
+  private Configuration conf = null;
+  private MiniDFSCluster cluster = null;
+  private URI defaultFSURI;
+  private File localTargetDir;
+  private static final String TEST_ROOT_DIR =
+      PathUtils.getTestDirName(TestViewFsOverloadSchemeWithHdfsScheme.class);
+  private static final String HDFS_USER_FOLDER = "/HDFSUser";
+  private static final String LOCAL_FOLDER = "/local";
+
+  @Before
+  public void startCluster() throws IOException {
+    conf = new Configuration();
+    conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_DELEGATION_TOKEN_ALWAYS_USE_KEY,
+        true);
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+    conf.set(String.format(
+        FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN,
+        HDFS_SCHEME), DistributedFileSystem.class.getName());
+    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(2).build();
+    cluster.waitClusterUp();
+    defaultFSURI =
+        URI.create(conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY));
+    localTargetDir = new File(TEST_ROOT_DIR, "/root/");
+    Assert.assertEquals(HDFS_SCHEME, defaultFSURI.getScheme()); // hdfs scheme.
+  }
+
+  @After
+  public void tearDown() throws IOException {
+    if (cluster != null) {
+      FileSystem.closeAll();
+      cluster.shutdown();
+    }
+  }
+
+  private void createLinks(boolean needFalbackLink, Path hdfsTargetPath,
+      Path localTragetPath) {
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), HDFS_USER_FOLDER,
+        hdfsTargetPath.toUri());
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), LOCAL_FOLDER,
+        localTragetPath.toUri());
+    if (needFalbackLink) {
+      ConfigUtil.addLinkFallback(conf, defaultFSURI.getAuthority(),
+          hdfsTargetPath.toUri());
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * create file /HDFSUser/testfile should create in hdfs
+   * create file /local/test should create directory in local fs
+   */
+  @Test(timeout = 30000)
+  public void testMountLinkWithLocalAndHDFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    Assert.assertEquals(2, fs.getMountPoints().length);
+
+    // /HDFSUser/testfile
+    Path hdfsFile = new Path(HDFS_USER_FOLDER + "/testfile");
+    // /local/test
+    Path localDir = new Path(LOCAL_FOLDER + "/test");
+
+    fs.create(hdfsFile); // /HDFSUser/testfile
+    fs.mkdirs(localDir); // /local/test
+
+    // Initialize HDFS and test files exist in ls or not
+    DistributedFileSystem dfs = new DistributedFileSystem();
+    dfs.initialize(defaultFSURI, conf);
+    try {
+      Assert.assertTrue(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should be in hdfs.
+      Assert.assertFalse(dfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should not be in local fs.
+    } finally {
+      dfs.close();
+    }
+
+    RawLocalFileSystem lfs = new RawLocalFileSystem();
+    lfs.initialize(localTragetPath.toUri(), conf);
+    try {
+      Assert.assertFalse(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(hdfsTargetPath),
+              hdfsFile.getName()))); // should not be in hdfs.
+      Assert.assertTrue(lfs.exists(
+          new Path(Path.getPathWithoutSchemeAndAuthority(localTragetPath),
+              localDir.getName()))); // should be in local fs.
+    } finally {
+      lfs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> nonexistent://NonExistent/User/
+   * It should fail to add non existent fs link.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testMountLinkWithNonExistentLink() throws Exception {
+    final String userFolder = "/User";
+    final Path nonExistTargetPath =
+        new Path("nonexistent://NonExistent" + userFolder);
+
+    /**
+     * Below addLink will create following mount points
+     * hdfs://localhost:xxx/User --> nonexistent://NonExistent/User/
+     */
+    ConfigUtil.addLink(conf, defaultFSURI.getAuthority(), userFolder,
+        nonExistTargetPath.toUri());
+    FileSystem.get(conf);
+    Assert.fail("Expected to fail with non existent link");
+  }
+
+  /**
+   * Create mount links as follows.
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus on / should list the mount links.
+   */
+  @Test(timeout = 30000)
+  public void testListStatusOnRootShouldListAllMountLinks() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      FileStatus[] ls = fs.listStatus(new Path("/"));
+      Assert.assertEquals(2, ls.length);
+      String lsPath1 =
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).toString();
+      String lsPath2 =
+          Path.getPathWithoutSchemeAndAuthority(ls[1].getPath()).toString();
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath1) || LOCAL_FOLDER.equals(lsPath1));
+      Assert.assertTrue(
+          HDFS_USER_FOLDER.equals(lsPath2) || LOCAL_FOLDER.equals(lsPath2));
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * ListStatus non mount directory should fail.
+   */
+  @Test(expected = IOException.class, timeout = 30000)
+  public void testListStatusOnNonMountedPath() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.listStatus(new Path("/nonMount"));
+      Assert.fail("It should fail as no mount link with /nonMount");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows hdfs://localhost:xxx/HDFSUser -->
+   * hdfs://localhost:xxx/HDFSUser/ hdfs://localhost:xxx/local -->
+   * file://TEST_ROOT_DIR/root/ fallback --> hdfs://localhost:xxx/HDFSUser/
+   * Creating file or directory at non root level should succeed with fallback
+   * links.
+   */
+  @Test(timeout = 30000)
+  public void testWithLinkFallBack() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/nonMount/myfile"));
+      FileStatus[] ls = fs.listStatus(new Path("/nonMount"));
+      Assert.assertEquals(1, ls.length);
+      Assert.assertEquals(
+          Path.getPathWithoutSchemeAndAuthority(ls[0].getPath()).getName(),
+          "myfile");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It can not find any mount link. ViewFS expects a mount point from root.
+   */
+  @Test(expected = NotInMountpointException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailWhenMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(false, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/newFileOnRoot"));
+      Assert.fail("It should fail as root is read only in viewFS.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = AccessControlException.class, timeout = 30000)
+  public void testCreateOnRootShouldFailEvenFallBackMountLinkConfigured()
+      throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+
+    createLinks(true, hdfsTargetPath, localTragetPath);
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail(
+          "It should fail as root is read only in viewFS, even when configured"
+              + " with fallback.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   * fallback --> hdfs://localhost:xxx/HDFSUser/
+   *
+   * It will find fallback link, but root is not accessible and read only.
+   */
+  @Test(expected = UnsupportedFileSystemException.class, timeout = 30000)
+  public void testInvalidOverloadSchemeTargetFS() throws Exception {
+    final Path hdfsTargetPath = new Path(defaultFSURI + HDFS_USER_FOLDER);
+    final Path localTragetPath = new Path(localTargetDir.toURI());
+    conf = new Configuration();
+    createLinks(true, hdfsTargetPath, localTragetPath);
+    conf.set(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY,
+        defaultFSURI.toString());
+    conf.set(String.format(FS_IMPL_PATTERN_KEY, HDFS_SCHEME),
+        ViewFsOverloadScheme.class.getName());
+
+    ViewFsOverloadScheme fs = (ViewFsOverloadScheme) FileSystem.get(conf);
+    try {
+      fs.create(new Path("/onRootWhenFallBack"));
+      Assert.fail("OverloadScheme target fs should be valid.");
+    } finally {
+      fs.close();
+    }
+  }
+
+  /**
+   * Create mount links as follows
+   * hdfs://localhost:xxx/HDFSUser --> hdfs://localhost:xxx/HDFSUser/
+   * hdfs://localhost:xxx/local --> file://TEST_ROOT_DIR/root/
+   *
+   * It should be able to create file using ViewFsOverloadScheme.
+   */
+  @Test(timeout = 30000)
+  public void testViewFsOverloadSchemeWhenInnerCacheDisabled()

Review comment:
    We have not changed the cache behavior itself; when the cache is disabled, we go through a different path.
    Anyway, it should be easy to add a test for that, I think. Let me add one; see the sketch below.
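    A sketch of what such a test might exercise (fs.<scheme>.impl.disable.cache is the standard FileSystem cache-disable key; the rest is illustrative):

        // Disable the FileSystem cache for the hdfs scheme so that every
        // FileSystem.get(conf) call builds a fresh ViewFsOverloadScheme.
        conf.setBoolean("fs.hdfs.impl.disable.cache", true);
        FileSystem fs1 = FileSystem.get(conf);
        FileSystem fs2 = FileSystem.get(conf);
        // With the cache disabled, the two lookups return distinct
        // instances that should still resolve the same mount links.
        Assert.assertNotSame(fs1, fs2);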






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r419176899



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFsOverloadScheme.java
##########
@@ -0,0 +1,175 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.viewfs;
+
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.FsConstants;
+import org.apache.hadoop.fs.UnsupportedFileSystemException;
+
+/******************************************************************************
+ * This class is extended from the ViewFileSystem for the overloaded scheme 
+ * file system. This object is the way end-user code interacts with a multiple
+ * mounted file systems transparently. Overloaded scheme based uri can be
+ * continued to use as end user interactive uri and mount links can be
+ * configured to any Hadoop compatible file system. This class maintains all 
+ * the target file system instances and delegates the calls to respective 
+ * target file system based on mount link mapping. Mount link configuration
+ * format and behavior is same as ViewFileSystem.
+ *****************************************************************************/
+@InterfaceAudience.LimitedPrivate({ "MapReduce", "HBase", "Hive" })
+@InterfaceStability.Evolving
+public class ViewFsOverloadScheme extends ViewFileSystem {
+
+  public ViewFsOverloadScheme() throws IOException {
+    super();
+  }
+
+  private FsCreator fsCreator;
+  private String myScheme;
+
+  @Override
+  public String getScheme() {
+    return myScheme;
+  }
+
+  @Override
+  public void initialize(final URI theUri, final Configuration conf)
+      throws IOException {
+    superFSInit(theUri, conf);
+    setConf(conf);
+    config = conf;
+    myScheme = config.get(FsConstants.VIEWFS_OVERLOAD_SCHEME_KEY);
+    fsCreator = new FsCreator() {
+
+      /**
+       * This method is overridden because in ViewFsOverloadScheme if
+       * overloaded scheme matches with mounted target fs scheme, file system
+       * should be created without going into fs.<scheme>.impl based 
+       * resolution. Otherwise it will end up into loop as target will be 
+       * resolved again to ViewFsOverloadScheme as fs.<scheme>.impl points to
+       * ViewFsOverloadScheme. So, below method will initialize the
+       * fs.viewfs.overload.scheme.target.<scheme>.impl. Other schemes can
+       * follow fs.newInstance
+       */
+      @Override
+      public FileSystem createFs(URI uri, Configuration conf)
+          throws IOException {
+        if (uri.getScheme().equals(myScheme)) {

Review comment:
       Other schemes that don't match will get resolved by FileSystem itself, right?
   Only when the target URI's scheme is the same as the OverloadScheme URI's scheme do we need to bypass, to avoid looping. In other cases, FileSystem#get or FileSystem#newInstance will return the right instance. Am I missing something?
   Since the exposed URI scheme is occupied by the OverloadScheme impl, we need that additional config to get the actual impl. That was the idea behind that configuration.
   
   >Currently, the override only works for one scheme.  
   The user is going to use one scheme from the outside per OverloadScheme instance, right?
   Your OverloadScheme init takes one URI to initialize, and that will be the user-initialized URI. All the rest go in as mapping URIs.
   
   Example: the inited URI is hdfs://xyz:9000, and the mapping target URIs are:
       /target1 -> hdfs://target1:9000
       /target2 -> hdfs://target2:9000
       /target3 -> s3a://bucket1/target3
   
   Here, when getting a target fs, we check whether its scheme matches the inited one. In this example hdfs was inited, so for the hdfs scheme, FileSystem#get resolution will always land on the OverloadScheme impl. To get the actual impl we use FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY to initialize the target fs. Other impls should be resolvable by FileSystem#get. (A minimal configuration sketch for this example is shown below.)
   Is your use case configuring the overload scheme for every fs.<scheme>.impl in the same client process? For example, in the same client process you would configure both fs.hdfs.impl=ViewFSOverloadScheme and fs.s3a.impl=ViewFSOverloadScheme?
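
   A minimal sketch of that configuration in Java, for the example above. The mount-table link keys follow the usual ViewFS fs.viewfs.mounttable.<table>.link.<path> convention, the host names and ports are made up, and DistributedFileSystem as the hdfs target impl is an assumption for illustration; only the fs.viewfs.overload.scheme.target.<scheme>.impl key comes from this patch:
   
       import java.net.URI;
       import org.apache.hadoop.conf.Configuration;
       import org.apache.hadoop.fs.FileSystem;
       import org.apache.hadoop.fs.viewfs.ViewFsOverloadScheme;
   
       Configuration conf = new Configuration();
       // Route the hdfs scheme through the overload implementation.
       conf.set("fs.hdfs.impl", ViewFsOverloadScheme.class.getName());
       // Mount links under the mount table named by the exposed authority.
       conf.set("fs.viewfs.mounttable.xyz:9000.link./target1",
           "hdfs://target1:9000");
       conf.set("fs.viewfs.mounttable.xyz:9000.link./target2",
           "hdfs://target2:9000");
       conf.set("fs.viewfs.mounttable.xyz:9000.link./target3",
           "s3a://bucket1/target3");
       // fs.hdfs.impl now points at the overload scheme, so the real HDFS
       // impl for matching-scheme targets comes from the patch's
       // fs.viewfs.overload.scheme.target.<scheme>.impl key instead.
       conf.set("fs.viewfs.overload.scheme.target.hdfs.impl",
           "org.apache.hadoop.hdfs.DistributedFileSystem");
       // s3a does not match the inited scheme, so FileSystem#get resolves
       // it through the usual fs.s3a.impl mechanism.
       FileSystem fs = FileSystem.get(URI.create("hdfs://xyz:9000/target1"), conf);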
   
   
   






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #1988: HDFS-15305. Extend ViewFS and provide ViewFSOverloadScheme implementation with scheme configurable.

Posted by GitBox <gi...@apache.org>.
umamaheswararao commented on a change in pull request #1988:
URL: https://github.com/apache/hadoop/pull/1988#discussion_r418460429



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FsConstants.java
##########
@@ -42,4 +42,11 @@
    */
   public static final URI VIEWFS_URI = URI.create("viewfs:///");
   public static final String VIEWFS_SCHEME = "viewfs";
+
+  public static final String VIEWFS_OVERLOAD_SCHEME_KEY =
+      "fs.viewfs.overload.scheme";
+  public static final String VIEWFS_OVERLOAD_SCHEME_DEFAULT = "hdfs";
+  public static final String FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY =
+      "fs.viewfs.overload.scheme.target.%s.impl";
+  public static final String FS_IMPL_PATTERN_KEY = "fs.%s.impl";

Review comment:
       The initial commit used the first two in src; anyway, I have now removed the VIEWFS_OVERLOAD_SCHEME_DEFAULT config completely. So only FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY remains, and it is used in the ViewFsOverloadScheme class.
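
   For reference, a rough sketch of how that remaining pattern key could be expanded when a target's scheme matches the inited one. This is illustrative only, not the patch's exact code; uri and conf are assumed in scope as in the createFs method above, and ReflectionUtils is org.apache.hadoop.util.ReflectionUtils:
   
       // Expand the pattern, e.g. to fs.viewfs.overload.scheme.target.hdfs.impl.
       String implKey = String.format(
           FsConstants.FS_VIEWFS_OVERLOAD_SCHEME_TARGET_FS_IMPL_PATTERN_KEY,
           uri.getScheme());
       Class<?> clazz = conf.getClass(implKey, null);
       if (clazz == null) {
         throw new UnsupportedFileSystemException(
             "No target fs impl configured for scheme " + uri.getScheme());
       }
       // Instantiate directly, bypassing fs.<scheme>.impl resolution so the
       // lookup cannot loop back into ViewFsOverloadScheme.
       FileSystem targetFs = (FileSystem) ReflectionUtils.newInstance(clazz, conf);
       targetFs.initialize(uri, conf);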



