Posted to common-issues@hadoop.apache.org by "Ctest (Jira)" <ji...@apache.org> on 2020/04/11 16:39:00 UTC

[jira] [Comment Edited] (HADOOP-16958) NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL

    [ https://issues.apache.org/jira/browse/HADOOP-16958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17081370#comment-17081370 ] 

Ctest edited comment on HADOOP-16958 at 4/11/20, 4:38 PM:
----------------------------------------------------------

The tests failed because I moved the policy == null check to the start of the ZKFCRpcServer constructor without also checking whether HADOOP_SECURITY_AUTHORIZATION is true. The policy only has to be non-null when HADOOP_SECURITY_AUTHORIZATION is true.

The constructor already checks HADOOP_SECURITY_AUTHORIZATION == true near the end of ZKFCRpcServer but not at the start. Should I move the policy == null check back to the end, or add a combined check for HADOOP_SECURITY_AUTHORIZATION == true and policy == null at the start, as in the diff below?

 
{code:java}
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java

   ZKFCRpcServer(Configuration conf,
       InetSocketAddress bindAddr,
       ZKFailoverController zkfc,
-      PolicyProvider policy) throws IOException {
+      PolicyProvider policy) throws IOException, HadoopIllegalArgumentException {
+    boolean securityAuthorizationEnabled = conf.getBoolean(
+            CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+            false);
+    if (securityAuthorizationEnabled && policy == null) {
+      throw new HadoopIllegalArgumentException(
+              CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION
+                      + " is configured to true but service-level"
+                      + " authorization security policy is null.");
+    }
+
     this.zkfc = zkfc;
     
     RPC.setProtocolEngine(conf, ZKFCProtocolPB.class,
@@ -61,8 +73,7 @@
         .setVerbose(false).build();
     
     // set service-level authorization security policy
-    if (conf.getBoolean(
-        CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
+    if (securityAuthorizationEnabled) {
       server.refreshServiceAcl(conf, policy);
     }

{code}
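
For illustration only, here is a minimal standalone sketch of the combined check's intended behavior (hypothetical class and method names, not the actual ZKFCRpcServer or Hadoop APIs): with authorization enabled and a null policy, construction fails fast with an informative message, while a null policy with authorization disabled is still accepted.

{code:java}
// Standalone sketch of the proposed fail-fast behavior (hypothetical names,
// not the real Hadoop classes).
import java.util.Properties;

public class GuardSketch {
  static final String AUTHZ_KEY = "hadoop.security.authorization";

  // Mirrors the proposed constructor guard: reject a null policy only when
  // service-level authorization is actually enabled.
  static void checkPolicy(Properties conf, Object policy) {
    boolean authzEnabled =
        Boolean.parseBoolean(conf.getProperty(AUTHZ_KEY, "false"));
    if (authzEnabled && policy == null) {
      throw new IllegalArgumentException(AUTHZ_KEY
          + " is configured to true but service-level"
          + " authorization security policy is null.");
    }
  }

  public static void main(String[] args) {
    Properties conf = new Properties();

    // Authorization disabled: a null policy is accepted, as before.
    checkPolicy(conf, null);

    // Authorization enabled with a null policy: fails fast with a clear
    // message instead of a later NullPointerException.
    conf.setProperty(AUTHZ_KEY, "true");
    try {
      checkPolicy(conf, null);
    } catch (IllegalArgumentException e) {
      System.out.println("Rejected as expected: " + e.getMessage());
    }
  }
}
{code}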


was (Author: ctest.team):
The tests failed because I moved the policy == null check to the start of the ZKFCRpcServer constructor without also checking whether HADOOP_SECURITY_AUTHORIZATION is true. The policy only has to be non-null when HADOOP_SECURITY_AUTHORIZATION is true.

The constructor already checks HADOOP_SECURITY_AUTHORIZATION == true near the end of ZKFCRpcServer but not at the start. Should I move the policy == null check back to the end, or add a combined check for HADOOP_SECURITY_AUTHORIZATION == true and policy == null at the start, as in the diff below?

 
{code:java}
diff --git a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
index 86dd91ee142..6c49d70a1d7 100644
--- a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
+++ b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ZKFCRpcServer.java
@@ -20,6 +20,7 @@
 import java.io.IOException;
 import java.net.InetSocketAddress;
 
+import org.apache.hadoop.HadoopIllegalArgumentException;
 import org.apache.hadoop.classification.InterfaceAudience;
 import org.apache.hadoop.classification.InterfaceStability;
 import org.apache.hadoop.conf.Configuration;
@@ -46,7 +48,17 @@
   ZKFCRpcServer(Configuration conf,
       InetSocketAddress bindAddr,
       ZKFailoverController zkfc,
-      PolicyProvider policy) throws IOException {
+      PolicyProvider policy) throws IOException, HadoopIllegalArgumentException {
+    boolean securityAuthorizationEnabled = conf.getBoolean(
+            CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION,
+            false);
+    if (securityAuthorizationEnabled && policy == null) {
+      throw new HadoopIllegalArgumentException(
+              CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION
+                      + " is configured to true but service-level"
+                      + " authorization security policy is null.");
+    }
+
     this.zkfc = zkfc;
     
     RPC.setProtocolEngine(conf, ZKFCProtocolPB.class,
@@ -61,8 +73,7 @@
         .setVerbose(false).build();
     
     // set service-level authorization security policy
-    if (conf.getBoolean(
-        CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
+    if (securityAuthorizationEnabled) {
       server.refreshServiceAcl(conf, policy);
     }

{code}

> NullPointerException(NPE) when hadoop.security.authorization is enabled but the input PolicyProvider for ZKFCRpcServer is NULL
> ------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16958
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16958
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common, ha
>    Affects Versions: 3.2.1
>            Reporter: Ctest
>            Priority: Critical
>         Attachments: HADOOP-16958.000.patch, HADOOP-16958.001.patch, HADOOP-16958.002.patch
>
>
> During initialization, ZKFCRpcServer refreshes the service authorization ACL for the service handled by this server if the config hadoop.security.authorization is enabled, by calling refreshServiceAcl with the input PolicyProvider and Configuration.
> {code:java}
> ZKFCRpcServer(Configuration conf,
>     InetSocketAddress bindAddr,
>     ZKFailoverController zkfc,
>     PolicyProvider policy) throws IOException {
>   this.server = ...
>
>   // set service-level authorization security policy
>   if (conf.getBoolean(
>       CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION, false)) {
>     server.refreshServiceAcl(conf, policy);
>   }
> }
> {code}
> refreshServiceAcl calls ServiceAuthorizationManager#refreshWithLoadedConfiguration, which gets the services directly from the provider with provider.getServices(). When the provider is NULL, the code throws an NPE without an informative message. In addition, the default value of the config `hadoop.security.authorization.policyprovider` (which controls the PolicyProvider here) is NULL, and the only caller of the ZKFCRpcServer constructor obtains its PolicyProvider from the abstract method getPolicyProvider, which does not enforce that the PolicyProvider is non-NULL.
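> A minimal standalone sketch of this failure mode is below (hypothetical, simplified names, not the actual ServiceAuthorizationManager code); the refresh path dereferences the provider directly, so a NULL provider surfaces only as a bare NullPointerException:
> {code:java}
> // Standalone sketch of the failure mode (hypothetical names, not the actual
> // Hadoop source): the refresh path iterates the provider's services without
> // a null check, so a NULL provider produces an uninformative NPE.
> public class NpeSketch {
>   interface PolicyProvider {
>     String[] getServices();
>   }
>
>   static void refreshWithLoadedConfiguration(PolicyProvider provider) {
>     // NPE is thrown here when provider == null, with no hint that the
>     // policy provider configuration is missing.
>     for (String service : provider.getServices()) {
>       System.out.println("refreshing ACL for " + service);
>     }
>   }
>
>   public static void main(String[] args) {
>     refreshWithLoadedConfiguration(null); // reproduces the uninformative NPE
>   }
> }
> {code}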
> The suggestion here is to add either a guard check or exception handling with an informative log message in ZKFCRpcServer to handle the case where the input PolicyProvider is NULL.
>  
> I am very happy to provide a patch for it if the issue is confirmed :)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
