Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/01/20 12:06:46 UTC

[GitHub] [hadoop] smarthanwang opened a new pull request #3909: HDFS-16431. Truncate CallerContext in client side.

smarthanwang opened a new pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909


   JIRA https://issues.apache.org/jira/browse/HDFS-16431
   
   ### Description of PR
   
   Currently the context of a CallerContext is truncated on the server side when it exceeds the maximum allowed length. It would be better to check and truncate it on the client side, to avoid unnecessary network and memory overhead on the NameNode.
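   
   As a rough illustration only (not the patch itself; the class name and the sample context string below are made up), client-side truncation amounts to capping the context against the configured `hadoop.caller.context.max.size` before the RPC is sent:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
   import org.apache.hadoop.ipc.CallerContext;
   
   public class ClientSideTruncationSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Read the same limit the NameNode enforces (hadoop.caller.context.max.size).
       int maxLen = conf.getInt(
           CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_MAX_SIZE_KEY,
           CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT);
   
       // Illustrative application context; in practice it comes from the caller.
       String context = "clientId:app-1,operation:listStatus";
       if (context.length() > maxLen) {
         // Truncate on the client so oversized bytes never reach the NameNode.
         context = context.substring(0, maxLen);
       }
       CallerContext.setCurrent(new CallerContext.Builder(context, conf).build());
     }
   }
   ```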
   
   ### How was this patch tested?
   
   Added unit tests verifying that the context is truncated correctly.
   
   ### For code changes:
   
   1. Refactor the CallerContext.Builder constructor to simplify usage (a usage sketch follows this list).
   2. Truncate the context before CallerContext.Builder.build() is called.
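   
   A hedged sketch of how construction looks with the refactored builder (the context fields here are illustrative; the constructor and methods are the ones shown in the diff later in this thread):
   
   ```java
   import java.nio.charset.StandardCharsets;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.ipc.CallerContext;
   
   public class BuilderUsageSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // With the refactored Builder, an over-long context is truncated before
       // build(), so the resulting CallerContext already respects the limits.
       CallerContext ctx = new CallerContext.Builder("clientId:app-1", conf)
           .append("operation:listStatus")
           .setSignature("sig".getBytes(StandardCharsets.UTF_8))
           .build();
       CallerContext.setCurrent(ctx);
     }
   }
   ```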
   
   
   
   




[GitHub] [hadoop] smarthanwang commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1019400952


   The failed checks seem to be caused by unrelated tests timing out. @jojochuang, could you help take a look?




[GitHub] [hadoop] smarthanwang commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1048392253


   Hi @jojochuang @sodonnel @Hexiaoqiao @ferhui, could you please take a look?




[GitHub] [hadoop] hadoop-yetus commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1043092837


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 36s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  6s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 23s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 26s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 43s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  23m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 50s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/3/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 1 new + 119 unchanged - 2 fixed = 120 total (was 121)  |
   | +1 :green_heart: |  mvnsite  |   4m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  0s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 24s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 13s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  26m 44s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 380m 45s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  33m 22s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 671m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3909 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8579feb2e775 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cb5e03a4bfd8bc51a803e149894a9cb9bb8a78da |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/3/testReport/ |
   | Max. process+thread count | 2261 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1017907082


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m  5s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 51s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  20m 33s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 48s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   8m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 28s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m  7s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  26m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 16s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  22m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 59s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/1/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 6 new + 119 unchanged - 2 fixed = 125 total (was 121)  |
   | +1 :green_heart: |  mvnsite  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 11s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   8m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 13s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  5s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 234m 45s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  21m 14s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 508m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3909 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 35a0cc69ab32 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 043e5dd52ca39c01c381caa63c77fb430ff83ab8 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/1/testReport/ |
   | Max. process+thread count | 3365 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] smarthanwang commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1053983477


   > Thanks @smarthanwang for involving me here. I am not very familiar with CallerContext; I think @jojochuang would be a good candidate to review it.
   
   @Hexiaoqiao thanks for your reply~




[GitHub] [hadoop] Hexiaoqiao commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
Hexiaoqiao commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1048440490


   Thanks @smarthanwang for involving me here. I am not very familiar with CallerContext; I think @jojochuang would be a good candidate to review it.




[GitHub] [hadoop] smarthanwang commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1037808131


   trigger jenkins




[GitHub] [hadoop] smarthanwang commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1032257355


   Hi @jojochuang @sodonnel @Hexiaoqiao, could you please take a look?




[GitHub] [hadoop] smarthanwang commented on a change in pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
smarthanwang commented on a change in pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#discussion_r789318117



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
##########
@@ -114,114 +120,160 @@ public String toString() {
     return str;
   }
 
-  /** The caller context builder. */
-  public static final class Builder {
+  /**
+   * The caller context related configs.
+   * Use singleton to avoid creating new instance every time.
+   */
+  private static final class Configs {
+
+    /** the maximum length limit for a context string*/
+    private int contextMaxLen;
+    /** the maximum length limit for a signature byte array*/
+    private int signatureMaxLen;
+    /** the filed separator for a context string */
+    private String fieldSeparator;
+
     private static final String KEY_VALUE_SEPARATOR = ":";
+
     /**
      * The illegal separators include '\t', '\n', '='.
-     * User should not set illegal separator.
+     * If the configured separator is illegal, use default
+     * {@link org.apache.hadoop.fs.CommonConfigurationKeysPublic#HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT}.
      */
     private static final Set<String> ILLEGAL_SEPARATORS =
         Collections.unmodifiableSet(
             new HashSet<>(Arrays.asList("\t", "\n", "=")));
-    private final String fieldSeparator;
-    private final StringBuilder sb = new StringBuilder();
-    private byte[] signature;
 
-    public Builder(String context) {
-      this(context, HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
+    private static volatile Configs INSTANCE;
+
+    private Configs(Configuration conf) {
+      this.init(conf);
     }
 
-    public Builder(String context, Configuration conf) {
-      if (isValid(context)) {
-        sb.append(context);
+    private void init(Configuration conf) {
+      this.contextMaxLen = conf.getInt(HADOOP_CALLER_CONTEXT_MAX_SIZE_KEY,
+          HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT);
+      if (this.contextMaxLen < 0) {
+        this.contextMaxLen = HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT;
       }
-      fieldSeparator = conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
+
+      this.signatureMaxLen =
+          conf.getInt(HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_KEY,
+              HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_DEFAULT);
+      if (this.signatureMaxLen < 0) {
+        this.signatureMaxLen = HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_DEFAULT;
+      }
+
+      this.fieldSeparator = conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
           HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
-      checkFieldSeparator(fieldSeparator);
+      if (ILLEGAL_SEPARATORS.contains(fieldSeparator)) {
+        this.fieldSeparator = HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT;
+      }
     }
 
-    public Builder(String context, String separator) {
-      if (isValid(context)) {
-        sb.append(context);
+    private static Configs getInstance(Configuration conf) {
+      if (INSTANCE == null) {
+        synchronized (Configs.class) {
+          if (INSTANCE == null) {
+            if (conf == null) {
+              conf = new Configuration();
+            }
+            INSTANCE = new Configs(conf);
+          }
+        }
       }
-      fieldSeparator = separator;
-      checkFieldSeparator(fieldSeparator);
+      return INSTANCE;
     }
+  }
 
-    /**
-     * Check whether the separator is legal.
-     * The illegal separators include '\t', '\n', '='.
-     * Throw IllegalArgumentException if the separator is Illegal.
-     * @param separator the separator of fields.
-     */
-    private void checkFieldSeparator(String separator) {
-      if (ILLEGAL_SEPARATORS.contains(separator)) {
-        throw new IllegalArgumentException("Illegal field separator: "
-            + separator);
+  /** The caller context builder. */
+  public static final class Builder {
+
+    private byte[] signature;
+    private final Configs configs;
+    private final StringBuilder ctx = new StringBuilder();
+
+    public Builder(String context) {
+      this(context, null);
+    }
+
+    public Builder(String context, Configuration conf) {
+      if (isValidFiled(context)) {
+        ctx.append(context);
+      }
+      configs = Configs.getInstance(conf);
+    }
+
+    @VisibleForTesting
+    Builder(String context, Configuration conf, boolean refresh) {
+      this(context, null);
+      if (refresh) {
+        configs.init(conf); // refresh configs for testing
       }
     }
 
     /**
      * Whether the field is valid.
-     * @param field one of the fields in context.
+     * @param field target field.
      * @return true if the field is not null or empty.
      */
-    private boolean isValid(String field) {
+    private boolean isValidFiled(String field) {
       return field != null && field.length() > 0;
     }
 
     public Builder setSignature(byte[] signature) {
-      if (signature != null && signature.length > 0) {
+      if (isSignatureValid(signature)) {
         this.signature = Arrays.copyOf(signature, signature.length);
       }
       return this;
     }
 
     /**
-     * Get the context.
-     * For example, the context is "key1:value1,key2:value2".
-     * @return the valid context or null.
-     */
-    public String getContext() {
-      return sb.length() > 0 ? sb.toString() : null;
-    }
-
-    /**
-     * Get the signature.
-     * @return the signature.
+     * Whether the signature is valid.
+     * @param signature target signature.
+     * @return true if the signature is not null and the length of signature
+     *         is greater than 0 and less than or equal to the length limit.
      */
-    public byte[] getSignature() {
-      return signature;
+    private boolean isSignatureValid(byte[] signature) {
+      return signature != null && signature.length > 0
+          && signature.length <= configs.signatureMaxLen;
     }
 
     /**
-     * Append new field to the context.
+     * Append new field to the context if the filed is not empty and length
+     * of the context is not exceeded limit.
      * @param field one of fields to append.
      * @return the builder.
      */
     public Builder append(String field) {
-      if (isValid(field)) {
-        if (sb.length() > 0) {
-          sb.append(fieldSeparator);
+      if (isValidFiled(field) && !isContextLengthExceedLimit()) {

Review comment:
       > How about logging a warning message here or throwing an exception? It would make troubleshooting easier.
   
    Thanks for the review. I think logging a warning message is better; I will add it soon.
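   
    For illustration only, a rough sketch of what such a warning inside `Builder.append` could look like (the logger field and message wording are assumptions, not the final patch; the other names come from the diff above):
   
    ```java
    public Builder append(String field) {
      if (!isValidFiled(field)) {
        return this;
      }
      if (isContextLengthExceedLimit()) {
        // Hypothetical warning; assumes an SLF4J LOG field on CallerContext.
        LOG.warn("Caller context exceeds the maximum length {}, dropping field: {}",
            configs.contextMaxLen, field);
        return this;
      }
      if (ctx.length() > 0) {
        ctx.append(configs.fieldSeparator);
      }
      ctx.append(field);
      return this;
    }
    ```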






[GitHub] [hadoop] hadoop-yetus commented on pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#issuecomment-1018739271


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 10s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 30s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  21m 42s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  7s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   7m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  5s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 35s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  23m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 41s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  20m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 54s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/2/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 2 new + 119 unchanged - 2 fixed = 121 total (was 121)  |
   | +1 :green_heart: |  mvnsite  |   4m  8s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m  5s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 24s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   8m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 10s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 17s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 377m 27s |  |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  32m 48s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt) |  hadoop-hdfs-rbf in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 658m 54s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.rbfbalance.TestRouterDistCpProcedure |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3909 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 412b4648a089 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da97e441f4aba50a87937010946a4411e9235dad |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/2/testReport/ |
   | Max. process+thread count | 2003 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: . |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3909/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] jojochuang commented on a change in pull request #3909: HDFS-16431. Truncate CallerContext in client side.

Posted by GitBox <gi...@apache.org>.
jojochuang commented on a change in pull request #3909:
URL: https://github.com/apache/hadoop/pull/3909#discussion_r789291489



##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java
##########
@@ -114,114 +120,160 @@ public String toString() {
     return str;
   }
 
-  /** The caller context builder. */
-  public static final class Builder {
+  /**
+   * The caller context related configs.
+   * Use singleton to avoid creating new instance every time.
+   */
+  private static final class Configs {
+
+    /** the maximum length limit for a context string*/
+    private int contextMaxLen;
+    /** the maximum length limit for a signature byte array*/
+    private int signatureMaxLen;
+    /** the filed separator for a context string */
+    private String fieldSeparator;
+
     private static final String KEY_VALUE_SEPARATOR = ":";
+
     /**
      * The illegal separators include '\t', '\n', '='.
-     * User should not set illegal separator.
+     * If the configured separator is illegal, use default
+     * {@link org.apache.hadoop.fs.CommonConfigurationKeysPublic#HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT}.
      */
     private static final Set<String> ILLEGAL_SEPARATORS =
         Collections.unmodifiableSet(
             new HashSet<>(Arrays.asList("\t", "\n", "=")));
-    private final String fieldSeparator;
-    private final StringBuilder sb = new StringBuilder();
-    private byte[] signature;
 
-    public Builder(String context) {
-      this(context, HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
+    private static volatile Configs INSTANCE;
+
+    private Configs(Configuration conf) {
+      this.init(conf);
     }
 
-    public Builder(String context, Configuration conf) {
-      if (isValid(context)) {
-        sb.append(context);
+    private void init(Configuration conf) {
+      this.contextMaxLen = conf.getInt(HADOOP_CALLER_CONTEXT_MAX_SIZE_KEY,
+          HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT);
+      if (this.contextMaxLen < 0) {
+        this.contextMaxLen = HADOOP_CALLER_CONTEXT_MAX_SIZE_DEFAULT;
       }
-      fieldSeparator = conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
+
+      this.signatureMaxLen =
+          conf.getInt(HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_KEY,
+              HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_DEFAULT);
+      if (this.signatureMaxLen < 0) {
+        this.signatureMaxLen = HADOOP_CALLER_CONTEXT_SIGNATURE_MAX_SIZE_DEFAULT;
+      }
+
+      this.fieldSeparator = conf.get(HADOOP_CALLER_CONTEXT_SEPARATOR_KEY,
           HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT);
-      checkFieldSeparator(fieldSeparator);
+      if (ILLEGAL_SEPARATORS.contains(fieldSeparator)) {
+        this.fieldSeparator = HADOOP_CALLER_CONTEXT_SEPARATOR_DEFAULT;
+      }
     }
 
-    public Builder(String context, String separator) {
-      if (isValid(context)) {
-        sb.append(context);
+    private static Configs getInstance(Configuration conf) {
+      if (INSTANCE == null) {
+        synchronized (Configs.class) {
+          if (INSTANCE == null) {
+            if (conf == null) {
+              conf = new Configuration();
+            }
+            INSTANCE = new Configs(conf);
+          }
+        }
       }
-      fieldSeparator = separator;
-      checkFieldSeparator(fieldSeparator);
+      return INSTANCE;
     }
+  }
 
-    /**
-     * Check whether the separator is legal.
-     * The illegal separators include '\t', '\n', '='.
-     * Throw IllegalArgumentException if the separator is Illegal.
-     * @param separator the separator of fields.
-     */
-    private void checkFieldSeparator(String separator) {
-      if (ILLEGAL_SEPARATORS.contains(separator)) {
-        throw new IllegalArgumentException("Illegal field separator: "
-            + separator);
+  /** The caller context builder. */
+  public static final class Builder {
+
+    private byte[] signature;
+    private final Configs configs;
+    private final StringBuilder ctx = new StringBuilder();
+
+    public Builder(String context) {
+      this(context, null);
+    }
+
+    public Builder(String context, Configuration conf) {
+      if (isValidFiled(context)) {
+        ctx.append(context);
+      }
+      configs = Configs.getInstance(conf);
+    }
+
+    @VisibleForTesting
+    Builder(String context, Configuration conf, boolean refresh) {
+      this(context, null);
+      if (refresh) {
+        configs.init(conf); // refresh configs for testing
       }
     }
 
     /**
      * Whether the field is valid.
-     * @param field one of the fields in context.
+     * @param field target field.
      * @return true if the field is not null or empty.
      */
-    private boolean isValid(String field) {
+    private boolean isValidFiled(String field) {
       return field != null && field.length() > 0;
     }
 
     public Builder setSignature(byte[] signature) {
-      if (signature != null && signature.length > 0) {
+      if (isSignatureValid(signature)) {
         this.signature = Arrays.copyOf(signature, signature.length);
       }
       return this;
     }
 
     /**
-     * Get the context.
-     * For example, the context is "key1:value1,key2:value2".
-     * @return the valid context or null.
-     */
-    public String getContext() {
-      return sb.length() > 0 ? sb.toString() : null;
-    }
-
-    /**
-     * Get the signature.
-     * @return the signature.
+     * Whether the signature is valid.
+     * @param signature target signature.
+     * @return true if the signature is not null and the length of signature
+     *         is greater than 0 and less than or equal to the length limit.
      */
-    public byte[] getSignature() {
-      return signature;
+    private boolean isSignatureValid(byte[] signature) {
+      return signature != null && signature.length > 0
+          && signature.length <= configs.signatureMaxLen;
     }
 
     /**
-     * Append new field to the context.
+     * Append new field to the context if the filed is not empty and length
+     * of the context is not exceeded limit.
      * @param field one of fields to append.
      * @return the builder.
      */
     public Builder append(String field) {
-      if (isValid(field)) {
-        if (sb.length() > 0) {
-          sb.append(fieldSeparator);
+      if (isValidFiled(field) && !isContextLengthExceedLimit()) {

Review comment:
       How about logging a warning message here or throwing an exception? It would make troubleshooting easier.



