Posted to common-dev@hadoop.apache.org by "Sergey Shabalov (Jira)" <ji...@apache.org> on 2023/07/25 20:36:00 UTC

[jira] [Created] (HADOOP-18826) REST request fails with "Value for one of the query parameters specified in the request URI is invalid.", 400,

Sergey Shabalov created HADOOP-18826:
----------------------------------------

             Summary: REST request fails with "Value for one of the query parameters specified in the request URI is invalid.", 400,
                 Key: HADOOP-18826
                 URL: https://issues.apache.org/jira/browse/HADOOP-18826
             Project: Hadoop Common
          Issue Type: Bug
          Components: hadoop-thirdparty
    Affects Versions: 3.3.6, 3.3.4, 3.3.3, 3.3.5, 3.3.2, 3.3.1
            Reporter: Sergey Shabalov
         Attachments: test_hadoop-azure-3_3_1-FileSystem_getFileStatus - Copy.zip

I am using hadoop-azure-3.3.0.jar and have written code:
{code:java}
static final String ROOT_DIR = "abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net";

Configuration config = new Configuration();
config.set("fs.defaultFS", ROOT_DIR);
config.set("fs.adl.oauth2.access.token.provider.type", "ClientCredential");
config.set("fs.adl.oauth2.client.id", "");
config.set("fs.adl.oauth2.credential", "");
config.set("fs.adl.oauth2.refresh.url", "");
config.set("fs.azure.account.key.sshadlsgen2.dfs.core.windows.net", ACCESS_TOKEN);
config.set("fs.azure.skipUserGroupMetadataDuringInitialization", "true");
	FileSystem fs = FileSystem.get(config);
	System.out.println( "\nfs:'"+fs.toString()+"'");
	FileStatus status = fs.getFileStatus(new Path(ROOT_DIR)); // !!! Exception in 3.3.1-*
	System.out.println( "\nstatus:'"+status.toString()+"'");
 {code}
This worked properly up to and including 3.3.0.

Starting with 3.3.1 it fails with the following exception:
{code:java}
Caused by: Operation failed: "Value for one of the query parameters specified in the request URI is invalid.", 400, HEAD, https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:218)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:181)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:179)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:942)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:924)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:846)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:507) {code}
I did some investigation and found the following:

In hadoop-azure-3.3.0.jar we see:
{code:java}
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore{
	...
	public FileStatus getFileStatus(final Path path) throws IOException {
	...
Line 604:		op = client.getAclStatus(AbfsHttpConstants.FORWARD_SLASH + AbfsHttpConstants.ROOT_PATH);
	...
	}
	...
} {code}
and this code produces the REST request:
{code:java}
https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs//?upn=false&action=getAccessControl&timeout=90
  {code}
Note the trailing slash in the path part: "...ssh-test-fs{*}//{*}?upn=false...". This request works properly.

But from hadoop-azure-3.3.1.jar through the latest hadoop-azure-3.3.6.jar we see:
{code:java}
org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore {
	...
	public FileStatus getFileStatus(final Path path) throws IOException {
		...
				perfInfo.registerCallee("getAclStatus");
Line 846:       op = client.getAclStatus(getRelativePath(path));
		...
	}
	...
}
Line 1492:
private String getRelativePath(final Path path) {
	...
	return path.toUri().getPath();
} {code}
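The empty-string behavior can be reproduced with plain java.net.URI, which Hadoop's Path wraps (a minimal illustration, independent of Hadoop):
{code:java}
import java.net.URI;

public class RootPathDemo {
    public static void main(String[] args) {
        // An abfs root URI has an authority but no path component,
        // so URI.getPath() yields an empty string rather than "/".
        URI root = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net");
        System.out.println("[" + root.getPath() + "]");      // prints []

        // With an explicit trailing slash the path component is "/".
        URI rootSlash = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net/");
        System.out.println("[" + rootSlash.getPath() + "]"); // prints [/]
    }
}
{code}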
and this code produces the REST request:
{code:java}
https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90 {code}
There is no trailing slash in the path part: "...ssh-test-fs?upn=false...". This happens because the new code "path.toUri().getPath()" produces an empty string for the root path.

This request fails with the message:
{code:java}
Caused by: Operation failed: "Value for one of the query parameters specified in the request URI is invalid.", 400, HEAD, https://sshadlsgen2.dfs.core.windows.net/ssh-test-fs?upn=false&action=getAccessControl&timeout=90
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:218)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:181)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:179)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:942)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.getAclStatus(AbfsClient.java:924)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:846)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:507) {code}
Since this affects all hadoop-azure-3.3.*.jar versions, and those are the versions that use Log4j 2.* rather than 1.2.17, there is no version we can upgrade to.
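A possible fix (my own sketch, not an actual project patch; the method name mirrors AzureBlobFileSystemStore#getRelativePath but operates on a plain java.net.URI) would be to fall back to "/" when the URI's path component is empty:
{code:java}
import java.net.URI;

public class RelativePathFix {
    // Hypothetical variant of getRelativePath: when URI.getPath() is
    // empty (the root of the filesystem), return "/" so the REST path
    // is never an empty string.
    static String getRelativePath(URI uri) {
        String path = uri.getPath();
        return path.isEmpty() ? "/" : path;
    }

    public static void main(String[] args) {
        URI root = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net");
        System.out.println(getRelativePath(root)); // prints /

        URI file = URI.create("abfs://ssh-test-fs@sshadlsgen2.dfs.core.windows.net/dir/file.txt");
        System.out.println(getRelativePath(file)); // prints /dir/file.txt
    }
}
{code}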

 

I attach a sample of Maven project to try: test_hadoop-azure-3_3_1-FileSystem_getFileStatus - Copy.zip



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org