Posted to users@tomcat.apache.org by Stefan Mayr <st...@mayr-stefan.de> on 2020/12/01 00:17:22 UTC
Re: [EXTERNAL] Re: Bouncy Castle FIPS on RHEL 7.3
Hi,
Am 30.11.2020 um 17:09 schrieb Amit Pande:
> I guess I will have to investigate the RHEL 7.3 entropy issue separately (possibly as a hobby project) and look for other options to make progress.
>
> I still find it odd that something related to randomness (entropy generation) is so consistently slow (equally slow or slower on multiple RHEL 7.3 systems I have; maybe I need to look for machines from a different data center, or a physical 7.3 server).
>
> And yes, the 10-year certificate validity is just for testing purposes. 😊
>
> Thank you for your inputs. Indeed helpful in evaluating our choices.
>
> Thanks,
> Amit
you might have a look at rng-tools (rngd) or haveged to boost your
entropy pool.
We use haveged in a VMware virtualized environment and it reduces a
plain Tomcat startup from multiple minutes to just a few seconds.
I think Red Hat prefers rngd, but there should be some articles on
access.redhat.com to help you, depending on the hypervisor in use.
Regards,
Stefan
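A quick way to see whether the pool is actually starved (and whether haveged or rngd helps) is to read the kernel's entropy estimate from procfs. A minimal Java sketch, assuming a Linux host where /proc/sys/kernel/random/entropy_avail is available:

```java
import java.nio.file.Files;
import java.nio.file.Paths;

public class EntropyCheck {
    public static void main(String[] args) throws Exception {
        // The kernel's current estimate of available entropy, in bits.
        String raw = Files.readAllLines(
                Paths.get("/proc/sys/kernel/random/entropy_avail")).get(0).trim();
        int bits = Integer.parseInt(raw);
        System.out.println("entropy_avail = " + bits + " bits");
        // A pool that stays in the low hundreds suggests that blocking
        // reads from /dev/random (e.g. during key generation) will stall.
        if (bits < 1000) {
            System.out.println("pool looks starved");
        }
    }
}
```

Comparing this value before and after starting haveged or rngd should make their effect visible.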
---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org
RE: [EXTERNAL] Re: Bouncy Castle FIPS on RHEL 7.3
Posted by Amit Pande <Am...@veritas.com>.
Thank you Stefan, Chris for the inputs.
As I understand it from our security experts, there is no moving away from /dev/random for us (anything else isn't (strictly) FIPS compliant).
Thanks,
Amit
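If you are pinned to /dev/random, it may still be worth confirming which SecureRandom source your JVM actually uses, since that is what Tomcat's session-ID and key generation go through. A small sketch, assuming a Linux JDK where the "strong" algorithm defaults to NativePRNGBlocking (which reads /dev/random):

```java
import java.security.SecureRandom;
import java.security.Security;

public class StrongRandomDemo {
    public static void main(String[] args) throws Exception {
        // Which algorithms getInstanceStrong() will consider; on Linux JDKs
        // this security property is typically "NativePRNGBlocking:SUN".
        System.out.println("strong algorithms: "
                + Security.getProperty("securerandom.strongAlgorithms"));

        // Backed by /dev/random on Linux; this can stall on an
        // entropy-starved VM, matching the startup delays described above.
        SecureRandom strong = SecureRandom.getInstanceStrong();
        byte[] seed = new byte[16];
        strong.nextBytes(seed);
        System.out.println("algorithm in use: " + strong.getAlgorithm());
    }
}
```

The source can also be pinned explicitly via the securerandom.source property in the JDK's java.security file, or with -Djava.security.egd= on the command line.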
-----Original Message-----
From: Christopher Schultz <ch...@christopherschultz.net>
Sent: Wednesday, December 2, 2020 10:29 AM
To: users@tomcat.apache.org
Subject: Re: [EXTERNAL] Re: Bouncy Castle FIPS on RHEL 7.3
Stefan,
On 11/30/20 19:17, Stefan Mayr wrote:
> Hi,
>
> Am 30.11.2020 um 17:09 schrieb Amit Pande:
>> I guess I will have to investigate the RHEL 7.3 entropy issue separately (possibly as a hobby project) and look for other options to make progress.
>>
>> I still find it odd that something related to randomness (entropy generation) is so consistently slow (equally slow or slower on multiple RHEL 7.3 systems I have; maybe I need to look for machines from a different data center, or a physical 7.3 server).
>>
>> And yes, the 10-year certificate validity is just for testing
>> purposes. 😊
>>
>> Thank you for your inputs. Indeed helpful in evaluating our choices.
>>
>> Thanks,
>> Amit
>
> you might have a look at rng-tools (rngd) or haveged to boost your
> entropy pool.
>
> We use haveged in a VMware virtualized environment and it reduces a
> plain Tomcat startup from multiple minutes to just a few seconds.
>
> I think Red Hat prefers rngd, but there should be some articles on
> access.redhat.com to help you, depending on the hypervisor in use.
I would think long and hard about whether or not you want to use any of these tools. There are already ways to get "a lot of entropy really quickly" from the Linux kernel; specifically, /dev/urandom.
The whole point of both /dev/random and /dev/urandom existing side by side is so that the application can pick whether it wants "high quality entropy" (by using /dev/random) or "good enough randomness" (by using /dev/urandom).
Tools like haveged and rngd basically make /dev/random behave like /dev/urandom so the application can never have "high quality entropy"
even when it asks for it.
Have a look at this discussion on security.stackexchange to get you started down the path to paranoia:
https://security.stackexchange.com/questions/34523
My question has always been "if these things are both safe and a good idea, why does the Linux kernel not implement them directly?" There must be a reason why the kernel devs have decided not to "speed up"
/dev/random using the techniques used by both haveged and rngd. Maybe their argument is essentially "you can always just use haveged/rngd" but my guess is there is a more fundamental reason for not adopting these techniques directly in the kernel.
-chris
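The distinction Chris draws between the two devices is visible from Java as well: on a Linux JDK the SUN provider exposes both a blocking and a non-blocking native PRNG. A short sketch (assuming a Linux JDK; these algorithm names are provider-specific):

```java
import java.security.SecureRandom;

public class RandomSources {
    public static void main(String[] args) throws Exception {
        // "Good enough randomness": output comes from /dev/urandom and
        // never blocks, which is why it is the usual startup-speed fix.
        SecureRandom nonBlocking = SecureRandom.getInstance("NativePRNGNonBlocking");

        // "High quality entropy": output comes from /dev/random and can
        // block while the kernel pool refills.
        SecureRandom blocking = SecureRandom.getInstance("NativePRNGBlocking");

        byte[] buf = new byte[32];
        nonBlocking.nextBytes(buf); // safe to call; will not stall
        System.out.println(nonBlocking.getAlgorithm() + " vs " + blocking.getAlgorithm());
    }
}
```

For Tomcat specifically, the common (non-FIPS) workaround is -Djava.security.egd=file:/dev/./urandom, which points the default SecureRandom at the non-blocking device; with the FIPS constraint described above that option is off the table, which is why tools that feed the kernel pool get suggested instead.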
Re: [EXTERNAL] Re: Bouncy Castle FIPS on RHEL 7.3
Posted by Christopher Schultz <ch...@christopherschultz.net>.
Stefan,
On 11/30/20 19:17, Stefan Mayr wrote:
> Hi,
>
> Am 30.11.2020 um 17:09 schrieb Amit Pande:
>> I guess I will have to investigate the RHEL 7.3 entropy issue separately (possibly as a hobby project) and look for other options to make progress.
>>
>> I still find it odd that something related to randomness (entropy generation) is so consistently slow (equally slow or slower on multiple RHEL 7.3 systems I have; maybe I need to look for machines from a different data center, or a physical 7.3 server).
>>
>> And yes, the 10-year certificate validity is just for testing purposes. 😊
>>
>> Thank you for your inputs. Indeed helpful in evaluating our choices.
>>
>> Thanks,
>> Amit
>
> you might have a look at rng-tools (rngd) or haveged to boost your
> entropy pool.
>
> We use haveged in a VMware virtualized environment and it reduces a
> plain Tomcat startup from multiple minutes to just a few seconds.
>
> I think Red Hat prefers rngd, but there should be some articles on
> access.redhat.com to help you, depending on the hypervisor in use.
I would think long and hard about whether or not you want to use any of
these tools. There are already ways to get "a lot of entropy really
quickly" from the Linux kernel; specifically, /dev/urandom.
The whole point of both /dev/random and /dev/urandom existing side by
side is so that the application can pick whether it wants "high quality
entropy" (by using /dev/random) or "good enough randomness" (by using
/dev/urandom).
Tools like haveged and rngd basically make /dev/random behave like
/dev/urandom so the application can never have "high quality entropy"
even when it asks for it.
Have a look at this discussion on security.stackexchange to get you
started down the path to paranoia:
https://security.stackexchange.com/questions/34523
My question has always been "if these things are both safe and a good
idea, why does the Linux kernel not implement them directly?" There must
be a reason why the kernel devs have decided not to "speed up"
/dev/random using the techniques used by both haveged and rngd. Maybe
their argument is essentially "you can always just use haveged/rngd" but
my guess is there is a more fundamental reason for not adopting these
techniques directly in the kernel.
-chris