Posted to common-commits@hadoop.apache.org by tm...@apache.org on 2019/08/27 21:34:55 UTC

[hadoop] branch branch-3.2 updated (d255efa -> 2d8799f)

This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a change to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git.


    from d255efa  HDFS-14779. Fix logging error in TestEditLog#testMultiStreamsLoadEditWithConfMaxTxns
     new 006ae25  HADOOP-16163. NPE in setup/teardown of ITestAbfsDelegationTokens.
     new dd63612  HADOOP-16269. ABFS: add listFileStatus with StartFrom.
     new a6d50a9  HADOOP-16376. ABFS: Override access() to no-op.
     new ce23e97  HADOOP-16340. ABFS driver continues to retry on IOException responses from REST operations.
     new 3b3c0c4  HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time instead of UTC.
     new 9d722c6  HADOOP-16460: ABFS: fix for Server Name Indication (SNI)
     new 2d8799f  HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert Kanter.

The 7 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../hadoop-client-check-invariants/pom.xml         |   2 +
 .../hadoop-client-check-test-invariants/pom.xml    |   2 +
 .../hadoop-client-minicluster/pom.xml              |   2 +
 .../hadoop-client-runtime/pom.xml                  |   2 +
 hadoop-common-project/hadoop-common/pom.xml        |   2 +-
 hadoop-common-project/hadoop-kms/pom.xml           |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml     |   2 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml        |   2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml            |   2 +-
 .../hadoop-mapreduce-client-app/pom.xml            |  20 +++
 .../hadoop-mapreduce-client-jobclient/pom.xml      |   7 +-
 hadoop-project/pom.xml                             |  14 +-
 hadoop-tools/hadoop-azure/pom.xml                  |   2 +
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java    |  23 +++-
 .../fs/azurebfs/AzureBlobFileSystemStore.java      | 109 ++++++++++++++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../fs/azurebfs/oauth2/AzureADAuthenticator.java   |   4 +-
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 ++++++++
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 .../ITestAzureBlobFileSystemFileStatus.java        |  18 +++
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +++++++++++++++++++++
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  26 ++--
 .../hadoop-yarn/hadoop-yarn-common/pom.xml         |   2 +-
 .../pom.xml                                        |   2 +-
 .../hadoop-yarn-server-tests/pom.xml               |   2 +-
 .../hadoop-yarn-server-web-proxy/pom.xml           |   8 ++
 26 files changed, 442 insertions(+), 40 deletions(-)
 create mode 100644 hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
 create mode 100644 hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java
 copy hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestJUnitSetup.java => hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java (59%)


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 07/07: HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert Kanter.

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 2d8799f4bc2297b0414b7f9b30c7e465deaf76d4
Author: Akira Ajisaka <aa...@apache.org>
AuthorDate: Wed Oct 10 10:16:57 2018 +0900

    HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert Kanter.
---
 .../hadoop-client-check-invariants/pom.xml           |  2 ++
 .../hadoop-client-check-test-invariants/pom.xml      |  2 ++
 .../hadoop-client-minicluster/pom.xml                |  2 ++
 hadoop-client-modules/hadoop-client-runtime/pom.xml  |  2 ++
 hadoop-common-project/hadoop-common/pom.xml          |  2 +-
 hadoop-common-project/hadoop-kms/pom.xml             |  2 +-
 hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml       |  2 +-
 hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml          |  2 +-
 hadoop-hdfs-project/hadoop-hdfs/pom.xml              |  2 +-
 .../hadoop-mapreduce-client-app/pom.xml              | 20 ++++++++++++++++++++
 .../hadoop-mapreduce-client-jobclient/pom.xml        |  7 ++++++-
 hadoop-project/pom.xml                               | 12 +++++++++---
 .../hadoop-yarn/hadoop-yarn-common/pom.xml           |  2 +-
 .../pom.xml                                          |  2 +-
 .../hadoop-yarn-server-tests/pom.xml                 |  2 +-
 .../hadoop-yarn-server-web-proxy/pom.xml             |  8 ++++++++
 16 files changed, 59 insertions(+), 12 deletions(-)

diff --git a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
index 89ea837..4c94a69 100644
--- a/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-invariants/pom.xml
@@ -90,6 +90,8 @@
                     <exclude>log4j:log4j</exclude>
                     <!-- Leave javax annotations we need exposed -->
                     <exclude>com.google.code.findbugs:jsr305</exclude>
+                    <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
+                    <exclude>org.bouncycastle:*</exclude>
                   </excludes>
                 </banTransitiveDependencies>
                 <banDuplicateClasses>
diff --git a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
index 99ec36e..586ccee 100644
--- a/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
+++ b/hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
@@ -98,6 +98,8 @@
                     <exclude> org.hamcrest:hamcrest-core</exclude>
                     <!-- Leave javax annotations we need exposed -->
                     <exclude>com.google.code.findbugs:jsr305</exclude>
+                    <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
+                    <exclude>org.bouncycastle:*</exclude>
                   </excludes>
                 </banTransitiveDependencies>
                 <banDuplicateClasses>
diff --git a/hadoop-client-modules/hadoop-client-minicluster/pom.xml b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
index dcf3da9..964fed0 100644
--- a/hadoop-client-modules/hadoop-client-minicluster/pom.xml
+++ b/hadoop-client-modules/hadoop-client-minicluster/pom.xml
@@ -667,6 +667,8 @@
                       <exclude>com.google.code.findbugs:jsr305</exclude>
                       <exclude>log4j:log4j</exclude>
                       <!-- We need a filter that matches just those things that are included in the above artiacts -->
+                      <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
+                      <exclude>org.bouncycastle:*</exclude>
                     </excludes>
                   </artifactSet>
                   <filters>
diff --git a/hadoop-client-modules/hadoop-client-runtime/pom.xml b/hadoop-client-modules/hadoop-client-runtime/pom.xml
index 80fd3b6..8c2130c 100644
--- a/hadoop-client-modules/hadoop-client-runtime/pom.xml
+++ b/hadoop-client-modules/hadoop-client-runtime/pom.xml
@@ -158,6 +158,8 @@
                       <!-- the jdk ships part of the javax.annotation namespace, so if we want to relocate this we'll have to care it out by class :( -->
                       <exclude>com.google.code.findbugs:jsr305</exclude>
                       <exclude>io.dropwizard.metrics:metrics-core</exclude>
+                      <!-- Leave bouncycastle unshaded because it's signed with a special Oracle certificate so it can be a custom JCE security provider -->
+                      <exclude>org.bouncycastle:*</exclude>
                     </excludes>
                   </artifactSet>
                   <filters>
diff --git a/hadoop-common-project/hadoop-common/pom.xml b/hadoop-common-project/hadoop-common/pom.xml
index e2b096d..369c5d8 100644
--- a/hadoop-common-project/hadoop-common/pom.xml
+++ b/hadoop-common-project/hadoop-common/pom.xml
@@ -298,7 +298,7 @@
     </dependency>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-common-project/hadoop-kms/pom.xml b/hadoop-common-project/hadoop-kms/pom.xml
index 21ad81d..b7f996a 100644
--- a/hadoop-common-project/hadoop-kms/pom.xml
+++ b/hadoop-common-project/hadoop-kms/pom.xml
@@ -171,7 +171,7 @@
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
index 3379aa4..4223272 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
@@ -204,7 +204,7 @@
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
   </dependencies>
diff --git a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
index 30f4bea..96b7c3c 100644
--- a/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
@@ -165,7 +165,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     </dependency>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
   </dependencies>
diff --git a/hadoop-hdfs-project/hadoop-hdfs/pom.xml b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
index 17b1700..8e0c21f 100644
--- a/hadoop-hdfs-project/hadoop-hdfs/pom.xml
+++ b/hadoop-hdfs-project/hadoop-hdfs/pom.xml
@@ -190,7 +190,7 @@ https://maven.apache.org/xsd/maven-4.0.0.xsd">
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
index 532c44f..2b8aff6 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
@@ -46,6 +46,16 @@
     <dependency>
       <groupId>org.apache.hadoop</groupId>
       <artifactId>hadoop-yarn-server-web-proxy</artifactId>
+      <exclusions>
+        <exclusion>
+          <groupId>org.bouncycastle</groupId>
+          <artifactId>bcprov-jdk15on</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.bouncycastle</groupId>
+          <artifactId>bcpkix-jdk15on</artifactId>
+        </exclusion>
+      </exclusions>
     </dependency>
     <dependency>
       <groupId>org.apache.hadoop</groupId>
@@ -88,6 +98,16 @@
       <groupId>com.fasterxml.jackson.core</groupId>
       <artifactId>jackson-databind</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk15on</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcpkix-jdk15on</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 
   <build>
diff --git a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
index a202c15..c1e5d23 100644
--- a/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
+++ b/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
@@ -108,7 +108,12 @@
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcpkix-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
   </dependencies>
diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index b096f93..c5c1a30 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -96,6 +96,8 @@
     <guice.version>4.0</guice.version>
     <joda-time.version>2.9.9</joda-time.version>
 
+    <bouncycastle.version>1.60</bouncycastle.version>
+
     <!-- Required for testing LDAP integration -->
     <apacheds.version>2.0.0-M21</apacheds.version>
     <ldap-api.version>1.0.0-M33</ldap-api.version>
@@ -1296,10 +1298,14 @@
      </dependency>
      <dependency>
        <groupId>org.bouncycastle</groupId>
-       <artifactId>bcprov-jdk16</artifactId>
-       <version>1.46</version>
-       <scope>test</scope>
+       <artifactId>bcprov-jdk15on</artifactId>
+       <version>${bouncycastle.version}</version>
      </dependency>
+      <dependency>
+        <groupId>org.bouncycastle</groupId>
+        <artifactId>bcpkix-jdk15on</artifactId>
+        <version>${bouncycastle.version}</version>
+      </dependency>
 
      <dependency>
         <groupId>joda-time</groupId>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
index d21b149..2656215 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
@@ -139,7 +139,7 @@
     </dependency>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
index 0fec136..b7e80f2 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
@@ -177,7 +177,7 @@
     <!-- 'mvn dependency:analyze' fails to detect use of this dependency -->
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
index 9c05150..9472e43 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
@@ -127,7 +127,7 @@
     </dependency>
     <dependency>
       <groupId>org.bouncycastle</groupId>
-      <artifactId>bcprov-jdk16</artifactId>
+      <artifactId>bcprov-jdk15on</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
index 83ae355..b8a7d92 100644
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
@@ -115,6 +115,14 @@
       <artifactId>jersey-test-framework-grizzly2</artifactId>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk15on</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcpkix-jdk15on</artifactId>
+    </dependency>
 
   </dependencies>
 




[hadoop] 05/07: HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time instead of UTC.

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 3b3c0c4b8790ec4c96072a4704b320812296074b
Author: bilaharith <th...@gmail.com>
AuthorDate: Thu Aug 8 19:08:04 2019 +0100

    HADOOP-16479. ABFS FileStatus.getModificationTime returns localized time instead of UTC.
    
    Contributed by Bilahari T H
    
    Change-Id: I532055baaadfd7c324710e4b25f60cdf0378bdc0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystemStore.java   |  2 +-
 .../azurebfs/ITestAzureBlobFileSystemFileStatus.java   | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index 06a819a..ce0d411 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -115,7 +115,7 @@ public class AzureBlobFileSystemStore {
   private URI uri;
   private String userName;
   private String primaryUserGroup;
-  private static final String DATE_TIME_PATTERN = "E, dd MMM yyyy HH:mm:ss 'GMT'";
+  private static final String DATE_TIME_PATTERN = "E, dd MMM yyyy HH:mm:ss z";
   private static final String TOKEN_DATE_PATTERN = "yyyy-MM-dd'T'HH:mm:ss.SSSSSSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
index f514696..421fa9a 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java
@@ -122,4 +122,22 @@ public class ITestAzureBlobFileSystemFileStatus extends
     assertEquals(pathWithHost2.getName(), fileStatus2.getPath().getName());
   }
 
+  @Test
+  public void testLastModifiedTime() throws IOException {
+    AzureBlobFileSystem fs = this.getFileSystem();
+    Path testFilePath = new Path("childfile1.txt");
+    long createStartTime = System.currentTimeMillis();
+    long minCreateStartTime = (createStartTime / 1000) * 1000 - 1;
+    //  Dividing and multiplying by 1000 to make last 3 digits 0.
+    //  It is observed that modification time is returned with last 3
+    //  digits 0 always.
+    fs.create(testFilePath);
+    long createEndTime = System.currentTimeMillis();
+    FileStatus fStat = fs.getFileStatus(testFilePath);
+    long lastModifiedTime = fStat.getModificationTime();
+    assertTrue("lastModifiedTime should be after minCreateStartTime",
+        minCreateStartTime < lastModifiedTime);
+    assertTrue("lastModifiedTime should be before createEndTime",
+        createEndTime > lastModifiedTime);
+  }
 }
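The one-character pattern change above is the whole fix: in the old pattern, 'GMT' is a quoted literal, so SimpleDateFormat ignores the zone and interprets the timestamp in the JVM's default time zone; with "z", the trailing "GMT" is parsed as a real time zone. A minimal standalone sketch of the difference follows (the class name, helper method, and the chosen America/Los_Angeles host zone are illustrative, not part of the Hadoop change):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

public class LastModifiedDemo {

    // Parses the same HTTP date header with both patterns and returns the
    // discrepancy, in hours, introduced by the old literal-'GMT' pattern.
    static long hoursOfSkew(String header, String hostZone) throws ParseException {
        // Old pattern: 'GMT' is a quoted literal, so the timestamp is read
        // in the host's zone.
        SimpleDateFormat oldFmt =
            new SimpleDateFormat("E, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US);
        oldFmt.setTimeZone(TimeZone.getTimeZone(hostZone));
        long localized = oldFmt.parse(header).getTime();

        // New pattern: "z" parses the "GMT" token as an actual time zone,
        // so the result is UTC regardless of the host's default zone.
        SimpleDateFormat newFmt =
            new SimpleDateFormat("E, dd MMM yyyy HH:mm:ss z", Locale.US);
        newFmt.setTimeZone(TimeZone.getTimeZone(hostZone));
        long utc = newFmt.parse(header).getTime();

        return (localized - utc) / 3_600_000L;
    }

    public static void main(String[] args) throws ParseException {
        // On a host in US Pacific time (UTC-7 in August), the old pattern
        // reads the timestamp 7 hours late.
        System.out.println(hoursOfSkew("Thu, 08 Aug 2019 19:08:04 GMT",
                                       "America/Los_Angeles"));
    }
}
```

This is also why the test added above only bounds getModificationTime() between the create start and end times rather than asserting an exact value.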




[hadoop] 06/07: HADOOP-16460: ABFS: fix for Server Name Indication (SNI)

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 9d722c637eb863aeaf05bf3b528ab8dc32470eb7
Author: Sneha Vijayarajan <sn...@microsoft.com>
AuthorDate: Tue Jul 30 15:18:15 2019 +0000

    HADOOP-16460: ABFS: fix for Server Name Indication (SNI)
    
    Contributed by Sneha Vijayarajan <sn...@microsoft.com>
---
 hadoop-project/pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hadoop-project/pom.xml b/hadoop-project/pom.xml
index 5c03ad5..b096f93 100644
--- a/hadoop-project/pom.xml
+++ b/hadoop-project/pom.xml
@@ -1249,7 +1249,7 @@
       <dependency>
         <groupId>org.wildfly.openssl</groupId>
         <artifactId>wildfly-openssl</artifactId>
-        <version>1.0.4.Final</version>
+        <version>1.0.7.Final</version>
       </dependency>
 
       <dependency>




[hadoop] 02/07: HADOOP-16269. ABFS: add listFileStatus with StartFrom.

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit dd636127e9c3d80885b228631b44d7a1bc83ab8c
Author: Da Zhou <da...@microsoft.com>
AuthorDate: Wed May 8 17:20:46 2019 +0100

    HADOOP-16269. ABFS: add listFileStatus with StartFrom.
    
    Author:    Da Zhou
---
 .../fs/azurebfs/AzureBlobFileSystemStore.java      | 107 ++++++++++++++-
 .../fs/azurebfs/constants/AbfsHttpConstants.java   |   9 ++
 .../org/apache/hadoop/fs/azurebfs/utils/CRC64.java |  60 ++++++++
 .../fs/azurebfs/AbstractAbfsIntegrationTest.java   |   7 +-
 ...zureBlobFileSystemStoreListStatusWithRange.java | 151 +++++++++++++++++++++
 .../apache/hadoop/fs/azurebfs/TestAbfsCrc64.java   |  38 ++++++
 6 files changed, 363 insertions(+), 9 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
index bfab487..06a819a 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
@@ -31,6 +31,7 @@ import java.nio.charset.CharacterCodingException;
 import java.nio.charset.Charset;
 import java.nio.charset.CharsetDecoder;
 import java.nio.charset.CharsetEncoder;
+import java.nio.charset.StandardCharsets;
 import java.text.ParseException;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -46,6 +47,7 @@ import java.util.Set;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
 
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
@@ -79,6 +81,7 @@ import org.apache.hadoop.fs.azurebfs.services.AuthType;
 import org.apache.hadoop.fs.azurebfs.services.ExponentialRetryPolicy;
 import org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials;
 import org.apache.hadoop.fs.azurebfs.utils.Base64;
+import org.apache.hadoop.fs.azurebfs.utils.CRC64;
 import org.apache.hadoop.fs.azurebfs.utils.UriUtils;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
@@ -89,7 +92,17 @@ import org.apache.http.client.utils.URIBuilder;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_EQUALS;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_FORWARD_SLASH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_HYPHEN;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_PLUS;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_STAR;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.CHAR_UNDERSCORE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.ROOT_PATH;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.SINGLE_WHITE_SPACE;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.TOKEN_VERSION;
 import static org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.AZURE_ABFS_ENDPOINT;
+
 /**
  * Provides the bridging logic between Hadoop's abstract filesystem and Azure Storage.
  */
@@ -103,6 +116,7 @@ public class AzureBlobFileSystemStore {
   private String userName;
   private String primaryUserGroup;
   private static final String DATE_TIME_PATTERN = "E, dd MMM yyyy HH:mm:ss 'GMT'";
+  private static final String TOKEN_DATE_PATTERN = "yyyy-MM-dd'T'HH:mm:ss.SSSSSSS'Z'";
   private static final String XMS_PROPERTIES_ENCODING = "ISO-8859-1";
   private static final int LIST_MAX_RESULTS = 500;
 
@@ -514,15 +528,43 @@ public class AzureBlobFileSystemStore {
             eTag);
   }
 
+  /**
+   * @param path The list path.
+   * @return the entries in the path.
+   * */
   public FileStatus[] listStatus(final Path path) throws IOException {
-    LOG.debug("listStatus filesystem: {} path: {}",
+    return listStatus(path, null);
+  }
+
+  /**
+   * @param path Path the list path.
+   * @param startFrom the entry name that list results should start with.
+   *                  For example, if folder "/folder" contains four files: "afile", "bfile", "hfile", "ifile".
+   *                  Then listStatus(Path("/folder"), "hfile") will return "/folder/hfile" and "folder/ifile"
+   *                  Notice that if startFrom is a non-existent entry name, then the list response contains
+   *                  all entries after this non-existent entry in lexical order:
+   *                  listStatus(Path("/folder"), "cfile") will return "/folder/hfile" and "/folder/ifile".
+   *
+   * @return the entries in the path start from  "startFrom" in lexical order.
+   * */
+  @InterfaceStability.Unstable
+  public FileStatus[] listStatus(final Path path, final String startFrom) throws IOException {
+    LOG.debug("listStatus filesystem: {} path: {}, startFrom: {}",
             client.getFileSystem(),
-           path);
+            path,
+            startFrom);
 
-    String relativePath = path.isRoot() ? AbfsHttpConstants.EMPTY_STRING : getRelativePath(path);
+    final String relativePath = path.isRoot() ? AbfsHttpConstants.EMPTY_STRING : getRelativePath(path);
     String continuation = null;
-    ArrayList<FileStatus> fileStatuses = new ArrayList<>();
 
+    // generate continuation token if a valid startFrom is provided.
+    if (startFrom != null && !startFrom.isEmpty()) {
+      continuation = getIsNamespaceEnabled()
+              ? generateContinuationTokenForXns(startFrom)
+              : generateContinuationTokenForNonXns(path.isRoot() ? ROOT_PATH : relativePath, startFrom);
+    }
+
+    ArrayList<FileStatus> fileStatuses = new ArrayList<>();
     do {
       AbfsRestOperation op = client.listPath(relativePath, false, LIST_MAX_RESULTS, continuation);
       continuation = op.getResult().getResponseHeader(HttpHeaderConfigurations.X_MS_CONTINUATION);
@@ -575,6 +617,61 @@ public class AzureBlobFileSystemStore {
     return fileStatuses.toArray(new FileStatus[fileStatuses.size()]);
   }
 
+  // generate continuation token for xns account
+  private String generateContinuationTokenForXns(final String firstEntryName) {
+    Preconditions.checkArgument(!Strings.isNullOrEmpty(firstEntryName)
+            && !firstEntryName.startsWith(AbfsHttpConstants.ROOT_PATH),
+            "startFrom must be a dir/file name and it can not be a full path");
+
+    StringBuilder sb = new StringBuilder();
+    sb.append(firstEntryName).append("#$").append("0");
+
+    CRC64 crc64 = new CRC64();
+    StringBuilder token = new StringBuilder();
+    token.append(crc64.compute(sb.toString().getBytes(StandardCharsets.UTF_8)))
+            .append(SINGLE_WHITE_SPACE)
+            .append("0")
+            .append(SINGLE_WHITE_SPACE)
+            .append(firstEntryName);
+
+    return Base64.encode(token.toString().getBytes(StandardCharsets.UTF_8));
+  }
+
+  // generate continuation token for non-xns account
+  private String generateContinuationTokenForNonXns(final String path, final String firstEntryName) {
+    Preconditions.checkArgument(!Strings.isNullOrEmpty(firstEntryName)
+            && !firstEntryName.startsWith(AbfsHttpConstants.ROOT_PATH),
+            "startFrom must be a dir/file name and it can not be a full path");
+
+    // Notice: non-xns continuation token requires full path (first "/" is not included) for startFrom
+    final String startFrom = (path.isEmpty() || path.equals(ROOT_PATH))
+            ? firstEntryName
+            : path + ROOT_PATH + firstEntryName;
+
+    SimpleDateFormat simpleDateFormat = new SimpleDateFormat(TOKEN_DATE_PATTERN, Locale.US);
+    String date = simpleDateFormat.format(new Date());
+    String token = String.format("%06d!%s!%06d!%s!%06d!%s!",
+            path.length(), path, startFrom.length(), startFrom, date.length(), date);
+    String base64EncodedToken = Base64.encode(token.getBytes(StandardCharsets.UTF_8));
+
+    StringBuilder encodedTokenBuilder = new StringBuilder(base64EncodedToken.length() + 5);
+    encodedTokenBuilder.append(String.format("%s!%d!", TOKEN_VERSION, base64EncodedToken.length()));
+
+    for (int i = 0; i < base64EncodedToken.length(); i++) {
+      char current = base64EncodedToken.charAt(i);
+      if (CHAR_FORWARD_SLASH == current) {
+        current = CHAR_UNDERSCORE;
+      } else if (CHAR_PLUS == current) {
+        current = CHAR_STAR;
+      } else if (CHAR_EQUALS == current) {
+        current = CHAR_HYPHEN;
+      }
+      encodedTokenBuilder.append(current);
+    }
+
+    return encodedTokenBuilder.toString();
+  }
+
   public void setOwner(final Path path, final String owner, final String group) throws
           AzureBlobFileSystemException {
     if (!getIsNamespaceEnabled()) {
@@ -992,7 +1089,7 @@ public class AzureBlobFileSystemStore {
 
       FileStatus other = (FileStatus) obj;
 
-      if (!other.equals(this)) {// compare the path
+      if (!this.getPath().equals(other.getPath())) {// compare the path
         return false;
       }
 
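For readers following the token layout in generateContinuationTokenForNonXns above, here is a standalone sketch of the same encoding: length-prefixed '!'-separated fields, base64, a version/length header, then substitution of the base64 characters that clash with the token syntax. It is illustrative only; it uses java.util.Base64 in place of the ABFS Base64 helper (assumed to emit the same standard alphabet), and the field values in main are made up.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Standalone sketch of the non-XNS continuation-token encoding shown in the patch.
public class ContinuationTokenSketch {
    private static final String TOKEN_VERSION = "2"; // mirrors AbfsHttpConstants.TOKEN_VERSION

    public static String encode(String path, String startFrom, String date) {
        // Length-prefixed fields, '!'-separated, mirroring the format string in the patch.
        String token = String.format("%06d!%s!%06d!%s!%06d!%s!",
                path.length(), path, startFrom.length(), startFrom, date.length(), date);
        // java.util.Base64 stands in for the ABFS Base64 helper here.
        String b64 = Base64.getEncoder()
                .encodeToString(token.getBytes(StandardCharsets.UTF_8));
        StringBuilder out = new StringBuilder(b64.length() + 5);
        out.append(String.format("%s!%d!", TOKEN_VERSION, b64.length()));
        // Substitute the base64 characters that clash with the token syntax.
        for (int i = 0; i < b64.length(); i++) {
            char c = b64.charAt(i);
            if (c == '/') {
                c = '_';
            } else if (c == '+') {
                c = '*';
            } else if (c == '=') {
                c = '-';
            }
            out.append(c);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Hypothetical values: listing under /D01 starting from entry A0.
        System.out.println(encode("D01", "D01/A0", "Tue, 27 Aug 2019 00:00:00 GMT"));
    }
}
```

The result always begins with the "2!<length>!" header and contains no '/', '+', or '=' from the base64 alphabet.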
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
index 1f35854..e85c7f0 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/AbfsHttpConstants.java
@@ -39,6 +39,7 @@ public final class AbfsHttpConstants {
   public static final String GET_ACCESS_CONTROL = "getAccessControl";
   public static final String GET_STATUS = "getStatus";
   public static final String DEFAULT_TIMEOUT = "90";
+  public static final String TOKEN_VERSION = "2";
 
   public static final String JAVA_VERSION = "java.version";
   public static final String OS_NAME = "os.name";
@@ -91,5 +92,13 @@ public final class AbfsHttpConstants {
   public static final String PERMISSION_FORMAT = "%04d";
   public static final String SUPER_USER = "$superuser";
 
+  public static final char CHAR_FORWARD_SLASH = '/';
+  public static final char CHAR_EXCLAMATION_POINT = '!';
+  public static final char CHAR_UNDERSCORE = '_';
+  public static final char CHAR_HYPHEN = '-';
+  public static final char CHAR_EQUALS = '=';
+  public static final char CHAR_STAR = '*';
+  public static final char CHAR_PLUS = '+';
+
   private AbfsHttpConstants() {}
 }
diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
new file mode 100644
index 0000000..9790744
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CRC64.java
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs.utils;
+
+/**
+ * CRC64 implementation for AzureBlobFileSystem.
+ */
+public class CRC64 {
+
+  private static final long POLY = 0x9a6c9329ac4bc9b5L;
+  private static final int TABLE_LENGTH = 256;
+  private static final long[] TABLE = new long[TABLE_LENGTH];
+
+  private long value = -1;
+
+  /**
+   * @param input the input byte array.
+   * @return long value of the CRC-64 checksum of the data.
+   * */
+  public long compute(byte[] input) {
+    init();
+    for (int i = 0; i < input.length; i++) {
+      value = TABLE[(input[i] ^ (int) value) & 0xFF] ^ (value >>> 8);
+    }
+    return ~value;
+  }
+
+  /*
+   * Initialize a table constructed from POLY (0x9a6c9329ac4bc9b5L).
+   * */
+  private void init() {
+    value = -1;
+    for (int n = 0; n < TABLE_LENGTH; ++n) {
+      long crc = n;
+      for (int i = 0; i < 8; ++i) {
+        if ((crc & 1) == 1) {
+          crc = (crc >>> 1) ^ POLY;
+        } else {
+          crc >>>= 1;
+        }
+      }
+      TABLE[n] = crc;
+    }
+  }
+}
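The CRC-64 routine above can be exercised without the Hadoop classes; this sketch inlines the same table-driven algorithm with the same ABFS polynomial. The expected hex value for "#$" comes from the TestAbfsCrc64 vectors included later in this patch.

```java
import java.nio.charset.StandardCharsets;

// Standalone mirror of the table-driven CRC-64 above (same ABFS polynomial).
public class Crc64Demo {
    private static final long POLY = 0x9a6c9329ac4bc9b5L;
    private static final long[] TABLE = new long[256];

    static {
        // Build the lookup table exactly as CRC64.init() does.
        for (int n = 0; n < 256; n++) {
            long crc = n;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 1) == 1) ? (crc >>> 1) ^ POLY : crc >>> 1;
            }
            TABLE[n] = crc;
        }
    }

    public static long compute(byte[] input) {
        long value = -1;
        for (byte b : input) {
            value = TABLE[(b ^ (int) value) & 0xFF] ^ (value >>> 8);
        }
        return ~value;
    }

    public static void main(String[] args) {
        // Vector taken from TestAbfsCrc64 in this patch.
        System.out.println(Long.toHexString(compute("#$".getBytes(StandardCharsets.US_ASCII))));
        // prints f91f7e6a837dbfa8
    }
}
```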
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
index cb9549d..04be4f4 100644
--- a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java
@@ -24,7 +24,6 @@ import java.util.Hashtable;
 import java.util.UUID;
 import java.util.concurrent.Callable;
 
-import com.google.common.base.Preconditions;
 import org.junit.After;
 import org.junit.Before;
 import org.slf4j.Logger;
@@ -210,9 +209,9 @@ public abstract class AbstractAbfsIntegrationTest extends
    * @throws IOException failure during create/init.
    */
   public AzureBlobFileSystem createFileSystem() throws IOException {
-    Preconditions.checkState(abfs == null,
-        "existing ABFS instance exists: %s", abfs);
-    abfs = (AzureBlobFileSystem) FileSystem.newInstance(rawConfig);
+    if (abfs == null) {
+      abfs = (AzureBlobFileSystem) FileSystem.newInstance(rawConfig);
+    }
     return abfs;
   }
 
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java
new file mode 100644
index 0000000..849bb6b
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemStoreListStatusWithRange.java
@@ -0,0 +1,151 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.IOException;
+import java.util.Arrays;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+
+/**
+ * Test AzureBlobFileSystemStore listStatus with startFrom.
+ * */
+@RunWith(Parameterized.class)
+public class ITestAzureBlobFileSystemStoreListStatusWithRange extends
+        AbstractAbfsIntegrationTest {
+  private static final boolean SUCCEED = true;
+  private static final boolean FAIL = false;
+  private static final String[] SORTED_ENTRY_NAMES = {"1_folder", "A0", "D01", "a+", "c0", "name5"};
+
+  private AzureBlobFileSystemStore store;
+  private AzureBlobFileSystem fs;
+
+  @Parameterized.Parameter
+  public String path;
+
+  /**
+   * A valid startFrom for listFileStatus with range is a non-fully-qualified dir/file name.
+   * */
+  @Parameterized.Parameter(1)
+  public String startFrom;
+
+  @Parameterized.Parameter(2)
+  public int expectedStartIndexInArray;
+
+  @Parameterized.Parameter(3)
+  public boolean expectedResult;
+
+  @Parameterized.Parameters(name = "Testing path \"{0}\", startFrom: \"{1}\", expecting result: {3}") // Test path
+  public static Iterable<Object[]> params() {
+    return Arrays.asList(
+            new Object[][]{
+                    // case 0: list in root,  without range
+                    {"/", null, 0, SUCCEED},
+
+                    // case 1: list in the root, start from the second file
+                    {"/", SORTED_ENTRY_NAMES[1], 1, SUCCEED},
+
+                    // case 2: list in the root, invalid startFrom
+                    {"/", "/", -1, FAIL},
+
+                    // case 3: list in non-root level, valid startFrom : dir name
+                    {"/" + SORTED_ENTRY_NAMES[2], SORTED_ENTRY_NAMES[1], 1, SUCCEED},
+
+                    // case 4: list in non-root level, valid startFrom : file name
+                    {"/" + SORTED_ENTRY_NAMES[2], SORTED_ENTRY_NAMES[2], 2, SUCCEED},
+
+                    // case 5: list in non-root level, invalid startFrom
+                    {"/" + SORTED_ENTRY_NAMES[2], "/" + SORTED_ENTRY_NAMES[3], -1, FAIL},
+
+                    // case 6: list using a non-existent startFrom that is smaller than all entries in lexical order,
+                    //          expecting all entries to be returned
+                    {"/" + SORTED_ENTRY_NAMES[2], "0-non-existent", 0, SUCCEED},
+
+                    // case 7: list using a non-existent startFrom that is larger than all entries in lexical order,
+                    //         expecting 0 entries to be returned
+                    {"/" + SORTED_ENTRY_NAMES[2], "z-non-existent", -1, SUCCEED},
+
+                    // case 8: list using a non-existent startFrom that falls within the range
+                    {"/" + SORTED_ENTRY_NAMES[2], "A1", 2, SUCCEED}
+            });
+  }
+
+  public ITestAzureBlobFileSystemStoreListStatusWithRange() throws Exception {
+    super();
+    if (this.getFileSystem() == null) {
+      super.createFileSystem();
+    }
+    fs = this.getFileSystem();
+    store = fs.getAbfsStore();
+    prepareTestFiles();
+    // Sort the names for verification; the ABFS service should return the results in lexical order.
+    Arrays.sort(SORTED_ENTRY_NAMES);
+  }
+
+  @Test
+  public void testListWithRange() throws IOException {
+    try {
+      FileStatus[] listResult = store.listStatus(new Path(path), startFrom);
+      if (!expectedResult) {
+        Assert.fail("Excepting failure with IllegalArgumentException");
+      }
+      verifyFileStatus(listResult, new Path(path), expectedStartIndexInArray);
+    } catch (IllegalArgumentException ex) {
+      if (expectedResult) {
+        Assert.fail("Excepting success");
+      }
+    }
+  }
+
+  // compare the file status
+  private void verifyFileStatus(FileStatus[] listResult, Path parentPath, int startIndexInSortedName) throws IOException {
+    if (startIndexInSortedName == -1) {
+      Assert.assertEquals("Expected empty FileStatus array", 0, listResult.length);
+      return;
+    }
+
+    FileStatus[] allFileStatuses = fs.listStatus(parentPath);
+    Assert.assertEquals("number of dir/file doesn't match",
+            SORTED_ENTRY_NAMES.length, allFileStatuses.length);
+    int indexInResult = 0;
+    for (int index = startIndexInSortedName; index < SORTED_ENTRY_NAMES.length; index++) {
+      Assert.assertEquals("fileStatus doesn't match", allFileStatuses[index], listResult[indexInResult++]);
+    }
+  }
+
+  private void prepareTestFiles() throws IOException {
+    final AzureBlobFileSystem fs = getFileSystem();
+    // create a two-level file structure
+    for (String levelOneFolder : SORTED_ENTRY_NAMES) {
+      Path levelOnePath = new Path("/" + levelOneFolder);
+      Assert.assertTrue(fs.mkdirs(levelOnePath));
+      for (String fileName : SORTED_ENTRY_NAMES) {
+        Path filePath = new Path(levelOnePath, fileName);
+        ContractTestUtils.touch(fs, filePath);
+        ContractTestUtils.assertIsFile(fs, filePath);
+      }
+    }
+  }
+}
diff --git a/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java
new file mode 100644
index 0000000..ab39750
--- /dev/null
+++ b/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsCrc64.java
@@ -0,0 +1,38 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs;
+
+import org.junit.Assert;
+import org.junit.Test;
+
+import org.apache.hadoop.fs.azurebfs.utils.CRC64;
+/**
+ * Test for Crc64 in AzureBlobFileSystem, notice that ABFS CRC64 has its own polynomial.
+ * */
+public class TestAbfsCrc64 {
+
+  @Test
+  public void testCrc64Compute() {
+    CRC64 crc64 = new CRC64();
+    final String[] testStr = {"#$", "dir_2_ac83abee", "dir_42_976df1f5"};
+    final String[] expected = {"f91f7e6a837dbfa8", "203f9fefc38ae97b", "cc0d56eafe58a855"};
+    for (int i = 0; i < testStr.length; i++) {
+      Assert.assertEquals(expected[i], Long.toHexString(crc64.compute(testStr[i].getBytes())));
+    }
+  }
+}


---------------------------------------------------------------------
To unsubscribe, e-mail: common-commits-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-commits-help@hadoop.apache.org


[hadoop] 01/07: HADOOP-16163. NPE in setup/teardown of ITestAbfsDelegationTokens.

Posted by tm...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit 006ae258b3b8e95cfa4a4f6e16b9d56f9149be12
Author: Da Zhou <da...@microsoft.com>
AuthorDate: Tue Mar 5 14:01:21 2019 +0000

    HADOOP-16163. NPE in setup/teardown of ITestAbfsDelegationTokens.
    
    Contributed by Da Zhou.
    
    Signed-off-by: Steve Loughran <st...@apache.org>
---
 hadoop-tools/hadoop-azure/pom.xml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hadoop-tools/hadoop-azure/pom.xml b/hadoop-tools/hadoop-azure/pom.xml
index 01562fd..832fa95 100644
--- a/hadoop-tools/hadoop-azure/pom.xml
+++ b/hadoop-tools/hadoop-azure/pom.xml
@@ -566,6 +566,7 @@
                     <exclude>**/azurebfs/ITestAzureBlobFileSystemE2EScale.java</exclude>
                     <exclude>**/azurebfs/ITestAbfsReadWriteAndSeek.java</exclude>
                     <exclude>**/azurebfs/ITestAzureBlobFileSystemListStatus.java</exclude>
+                    <exclude>**/azurebfs/extensions/ITestAbfsDelegationTokens.java</exclude>
                   </excludes>
 
                 </configuration>
@@ -604,6 +605,7 @@
                     <include>**/azurebfs/ITestAzureBlobFileSystemE2EScale.java</include>
                     <include>**/azurebfs/ITestAbfsReadWriteAndSeek.java</include>
                     <include>**/azurebfs/ITestAzureBlobFileSystemListStatus.java</include>
+                    <include>**/azurebfs/extensions/ITestAbfsDelegationTokens.java</include>
                   </includes>
                 </configuration>
               </execution>




[hadoop] 04/07: HADOOP-16340. ABFS driver continues to retry on IOException responses from REST operations.

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit ce23e971b427b561e10c93c88ceade9cc9efa190
Author: Robert Levas <rl...@cloudera.com>
AuthorDate: Wed Jun 19 17:43:14 2019 +0100

    HADOOP-16340. ABFS driver continues to retry on IOException responses from REST operations.
    
    Contributed by Robert Levas.
    
    This makes the HttpException constructor protected rather than public, so it is possible
    to implement custom subclasses of this exception -exceptions which will not be retried.
    
    Change-Id: Ie8aaa23a707233c2db35948784908b6778ff3a8f
---
 .../org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java    | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
index df7b199..1d3a122 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/AzureADAuthenticator.java
@@ -164,6 +164,8 @@ public final class AzureADAuthenticator {
    * requestId and error message, it is thrown when AzureADAuthenticator
    * failed to get the Azure Active Directory token.
    */
+  @InterfaceAudience.LimitedPrivate("authorization-subsystems")
+  @InterfaceStability.Unstable
   public static class HttpException extends IOException {
     private int httpErrorCode;
     private String requestId;
@@ -184,7 +186,7 @@ public final class AzureADAuthenticator {
       return this.requestId;
     }
 
-    HttpException(int httpErrorCode, String requestId, String message) {
+    protected HttpException(int httpErrorCode, String requestId, String message) {
       super(message);
       this.httpErrorCode = httpErrorCode;
       this.requestId = requestId;

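To illustrate what the now-protected constructor enables, here is a hedged sketch: a minimal standalone mirror of HttpException (the real class is nested in org.apache.hadoop.fs.azurebfs.oauth2.AzureADAuthenticator) together with a hypothetical subclass of the kind an authorization subsystem could define so retry logic can distinguish it from transient IOExceptions. The class and field names below are illustrative, not part of the patch.

```java
import java.io.IOException;

// Minimal mirror of AzureADAuthenticator.HttpException, for illustration only.
class HttpException extends IOException {
    private final int httpErrorCode;
    private final String requestId;

    // Protected, so only subclasses (and the enclosing package) can construct it.
    protected HttpException(int httpErrorCode, String requestId, String message) {
        super(message);
        this.httpErrorCode = httpErrorCode;
        this.requestId = requestId;
    }

    public int getHttpErrorCode() { return httpErrorCode; }
    public String getRequestId() { return requestId; }
}

// Hypothetical non-retriable variant a custom token provider might throw.
class NonRetriableAuthException extends HttpException {
    NonRetriableAuthException(int httpErrorCode, String requestId, String message) {
        super(httpErrorCode, requestId, message);
    }
}

public class HttpExceptionDemo {
    public static void main(String[] args) {
        HttpException ex = new NonRetriableAuthException(401, "req-123", "token rejected");
        System.out.println(ex.getHttpErrorCode() + " " + ex.getRequestId());
        // prints: 401 req-123
    }
}
```

Because the subclass is a distinct type, a caller can check `instanceof NonRetriableAuthException` and fail fast instead of retrying.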



[hadoop] 03/07: HADOOP-16376. ABFS: Override access() to no-op.

Posted by tm...@apache.org.

tmarquardt pushed a commit to branch branch-3.2
in repository https://gitbox.apache.org/repos/asf/hadoop.git

commit a6d50a90542f1cb141f45b24d864cae42c2c2274
Author: Da Zhou <da...@microsoft.com>
AuthorDate: Sun Jun 16 19:20:46 2019 +0100

    HADOOP-16376. ABFS: Override access() to no-op.
    
    Contributed by Da Zhou.
    
    Change-Id: Ia0024bba32250189a87eb6247808b2473c331ed0
---
 .../hadoop/fs/azurebfs/AzureBlobFileSystem.java    | 23 ++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
index e321e9e..1663ed9 100644
--- a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
+++ b/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
@@ -38,12 +38,12 @@ import java.util.concurrent.Future;
 
 import com.google.common.annotations.VisibleForTesting;
 import com.google.common.base.Preconditions;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
-import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import org.apache.commons.lang3.ArrayUtils;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClient;
+import org.apache.hadoop.fs.azurebfs.services.AbfsClientThrottlingIntercept;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.BlockLocation;
@@ -70,6 +70,7 @@ import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;
 import org.apache.hadoop.fs.permission.FsAction;
 import org.apache.hadoop.fs.permission.FsPermission;
+import org.apache.hadoop.security.AccessControlException;
 import org.apache.hadoop.security.token.Token;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.util.Progressable;
@@ -839,6 +840,24 @@ public class AzureBlobFileSystem extends FileSystem {
     }
   }
 
+  /**
+   * Checks if the user can access a path.  The mode specifies which access
+   * checks to perform.  If the requested permissions are granted, then the
+   * method returns normally.  If access is denied, then the method throws an
+   * {@link AccessControlException}.
+   *
+   * @param path Path to check
+   * @param mode type of access to check
+   * @throws AccessControlException        if access is denied
+   * @throws java.io.FileNotFoundException if the path does not exist
+   * @throws IOException                   see specific implementation
+   */
+  @Override
+  public void access(final Path path, FsAction mode) throws IOException {
+    // TODO: make it no-op to unblock hive permission issue for now.
+    // Will add a long term fix similar to the implementation in AdlFileSystem.
+  }
+
   private FileStatus tryGetFileStatus(final Path f) {
     try {
       return getFileStatus(f);

