Posted to commits@river.apache.org by "Peter Jones (JIRA)" <ji...@apache.org> on 2007/07/27 23:57:52 UTC
[jira] Created: (RIVER-141) default mux maxFragmentSize has significant impact on Kerberos performance
default mux maxFragmentSize has significant impact on Kerberos performance
--------------------------------------------------------------------------
Key: RIVER-141
URL: https://issues.apache.org/jira/browse/RIVER-141
Project: River
Issue Type: Improvement
Components: net_jini_jeri
Affects Versions: jtsk_2.0
Reporter: Peter Jones
Priority: Minor
Bugtraq ID [4851548|http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4851548]
The default 1 KB {{maxFragmentSize}} in {{MuxClient}} and {{MuxServer}} appears to have a significant impact on Kerberos performance. In a timed experiment starting up activatable versions of all of our services, with Kerberos configured for everything, the start-up time went from ~800 seconds to ~488 seconds when the {{maxFragmentSize}} was changed from 1 KB to 8 KB and the {{bufSize}} in {{KerberosUtil.ConnectionOutputStream}} was changed from 8000 to 8224. Start-up time for an equivalent SSL setup appeared to be essentially unchanged.
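The cost described above comes down to per-fragment overhead: the smaller the fragment limit, the more fragments (and, under Kerberos, the more per-fragment wrap operations) a write incurs. A minimal sketch of that arithmetic, using a hypothetical {{HEADER_BYTES}} value (the real mux header size is not stated in this report):

```java
// Hypothetical sketch, not River's actual classes: models how a mux-style
// connection splits application data into fragments, each carrying an
// assumed fixed per-fragment header.
public class FragmentSizing {
    static final int HEADER_BYTES = 4; // assumed mux fragment header size

    /** Number of fragments needed to send payloadBytes with the given limit. */
    static int fragmentCount(int payloadBytes, int maxFragmentSize) {
        return (payloadBytes + maxFragmentSize - 1) / maxFragmentSize;
    }

    /** Total bytes on the wire, counting the per-fragment header overhead. */
    static int wireBytes(int payloadBytes, int maxFragmentSize) {
        return payloadBytes
                + fragmentCount(payloadBytes, maxFragmentSize) * HEADER_BYTES;
    }

    public static void main(String[] args) {
        int payload = 8000; // matches the old KerberosUtil bufSize in the report
        // 1 KB limit: 8 fragments; 8 KB limit: a single fragment.
        System.out.println("1 KB fragments: " + fragmentCount(payload, 1024));
        System.out.println("8 KB fragments: " + fragmentCount(payload, 8192));
    }
}
```

An 8000-byte buffered write thus goes from 8 fragments (8 Kerberos wraps) to 1 under the larger limit, which is consistent with the reported speedup.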
h4. Evaluation note:
OK. I believe that the current 1KB values were chosen (somewhat whimsically) based on what seemed to be not so large as to cause breakup of a fragment (and a reasonable number of headers) into multiple segments with our local systems and network configuration. Clearly, an intervening encryption layer can make other performance considerations dominate.
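The "fits in a segment" rationale can be sketched with a quick check against a typical Ethernet TCP MSS. Both constants here are assumptions for illustration, not values from the River codebase:

```java
// Hypothetical sketch: does a mux fragment (payload plus an assumed header)
// fit inside a single TCP segment for a typical Ethernet MSS? This matches
// the stated rationale for the 1 KB default.
public class SegmentFit {
    static final int ETHERNET_MSS = 1460; // typical MSS over Ethernet, assumed
    static final int HEADER_BYTES = 4;    // assumed mux fragment header size

    static boolean fitsInOneSegment(int maxFragmentSize) {
        return maxFragmentSize + HEADER_BYTES <= ETHERNET_MSS;
    }

    public static void main(String[] args) {
        System.out.println("1 KB fragment fits: " + fitsInOneSegment(1024));
        System.out.println("8 KB fragment fits: " + fitsInOneSegment(8192));
    }
}
```

A 1 KB fragment fits in one segment while an 8 KB fragment does not, so the original default trades per-fragment overhead for avoiding fragment break-up across segments.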
h4. Comments note:
Hmm, hold the fort. The timing in the original description was done with JDK 1.4.1. Using 1.4.2 beta, to first order the change doesn't seem to cause a significant improvement: the total time has decreased to ~50 seconds due to JDK improvements, and the buffering change might at most shave off a second or two.
h4. Evaluation note:
The performance impact of the default {{maxFragmentSize}} should be investigated in more detail post-2.0.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Updated: (RIVER-141) default mux maxFragmentSize has significant impact on Kerberos performance
Posted by "Peter Jones (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/RIVER-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Peter Jones updated RIVER-141:
------------------------------