Posted to users@kafka.apache.org by Elliot Crosby-McCullough <el...@freeagent.com> on 2017/09/21 12:58:11 UTC

Idle cluster high CPU usage

Hello,

We've been trying to debug an issue with our kafka cluster for several days
now and we're close to out of options.

We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
nodes, plus a few streams clients and a ruby producer.

Two of the three brokers are pinning a core and have been for days; no
amount of restarting, debugging, or clearing out of data seems to help.

We've got the logs at DEBUG level, which show a constant flow much like
this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251

As best as we can tell the brokers are up to date on replication and the
leaders are well-balanced.  The cluster is receiving no traffic; no
messages are being sent in and the consumers/streams are shut down.

From our profiling of the JVM it looks like the CPU is mostly working in
replication threads and SSL traffic (it's a secured cluster) but that
shouldn't be treated as gospel.

Any advice would be greatly appreciated.

All the best,
Elliot

Re: Idle cluster high CPU usage

Posted by Ismael Juma <is...@juma.me.uk>.
Thanks for following up, Elliot. Good to know. :)

Ismael

On Mon, Sep 25, 2017 at 10:20 AM, Elliot Crosby-McCullough <
elliot.crosby-mccullough@freeagent.com> wrote:

> We did a bunch of sampling, to no particular avail; broadly speaking the
> answer was "it's doing a bunch of talking".
>
> For those who might want to know what this was in the end, during part of
> our previous debugging we enabled `javax.net.debug=all` and didn't twig
> that that had no effect on the log4j logs, and didn't notice the vast
> number of iops to `kafkaServer.out`.  Writing that log was eating all the
> CPU.
>
> On 23 September 2017 at 00:44, jrpilat@gmail.com <jr...@gmail.com>
> wrote:
>
> > One thing worth trying is hooking up to 1 or more of the brokers via JMX
> > and examining the running threads;  If that doesn't elucidate the cause,
> > you could move onto sampling or profiling via JMX to see what's taking up
> > all that CPU.
> >
> > - Jordan Pilat
> >
> > On 2017-09-21 07:58, Elliot Crosby-McCullough <elliot.crosby-mccullough@
> > freeagent.com> wrote:
> > > Hello,
> > >
> > > We've been trying to debug an issue with our kafka cluster for several
> > days
> > > now and we're close to out of options.
> > >
> > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> registry
> > > nodes, plus a few streams clients and a ruby producer.
> > >
> > > Two of the three brokers are pinning a core and have been for days, no
> > > amount of restarting, debugging, or clearing out of data seems to help.
> > >
> > > We've got the logs at DEBUG level which shows a constant flow much like
> > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > >
> > > As best as we can tell the brokers are up to date on replication and
> the
> > > leaders are well-balanced.  The cluster is receiving no traffic; no
> > > messages are being sent in and the consumers/streams are shut down.
> > >
> > > From our profiling of the JVM it looks like the CPU is mostly working
> in
> > > replication threads and SSL traffic (it's a secured cluster) but that
> > > shouldn't be treated as gospel.
> > >
> > > Any advice would be greatly appreciated.
> > >
> > > All the best,
> > > Elliot
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by Elliot Crosby-McCullough <el...@freeagent.com>.
We did a bunch of sampling, to no particular avail; broadly speaking the
answer was "it's doing a bunch of talking".

For those who might want to know what this was in the end, during part of
our previous debugging we enabled `javax.net.debug=all` and didn't twig
that that had no effect on the log4j logs, and didn't notice the vast
number of iops to `kafkaServer.out`.  Writing that log was eating all the
CPU.
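
For anyone hitting the same thing: a quick way to confirm this failure mode
is to check whether the flag is still set on the broker JVM. This is only a
minimal sketch (the class name is illustrative, not part of the broker); with
-Djavax.net.debug=all the JSSE layer dumps every handshake and record to
stdout, which on our setup ended up in kafkaServer.out.

    // Reports whether JSSE debug output is enabled on this JVM.
    public class SslDebugCheck {
        public static void main(String[] args) {
            String sslDebug = System.getProperty("javax.net.debug");
            if (sslDebug == null || sslDebug.isEmpty()) {
                System.out.println("javax.net.debug is not set; JSSE debug output is off.");
            } else {
                System.out.println("javax.net.debug=" + sslDebug
                        + " -- expect heavy stdout logging (and CPU) on an SSL-secured broker.");
            }
        }
    }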

On 23 September 2017 at 00:44, jrpilat@gmail.com <jr...@gmail.com> wrote:

> One thing worth trying is hooking up to 1 or more of the brokers via JMX
> and examining the running threads;  If that doesn't elucidate the cause,
> you could move onto sampling or profiling via JMX to see what's taking up
> all that CPU.
>
> - Jordan Pilat
>
> On 2017-09-21 07:58, Elliot Crosby-McCullough <elliot.crosby-mccullough@
> freeagent.com> wrote:
> > Hello,
> >
> > We've been trying to debug an issue with our kafka cluster for several
> days
> > now and we're close to out of options.
> >
> > We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
> > nodes, plus a few streams clients and a ruby producer.
> >
> > Two of the three brokers are pinning a core and have been for days, no
> > amount of restarting, debugging, or clearing out of data seems to help.
> >
> > We've got the logs at DEBUG level which shows a constant flow much like
> > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> >
> > As best as we can tell the brokers are up to date on replication and the
> > leaders are well-balanced.  The cluster is receiving no traffic; no
> > messages are being sent in and the consumers/streams are shut down.
> >
> > From our profiling of the JVM it looks like the CPU is mostly working in
> > replication threads and SSL traffic (it's a secured cluster) but that
> > shouldn't be treated as gospel.
> >
> > Any advice would be greatly appreciated.
> >
> > All the best,
> > Elliot
> >
>

Re: Idle cluster high CPU usage

Posted by "jrpilat@gmail.com" <jr...@gmail.com>.
One thing worth trying is hooking up to one or more of the brokers via JMX and examining the running threads; if that doesn't elucidate the cause, you could move on to sampling or profiling via JMX to see what's taking up all that CPU.
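
As a rough sketch of that approach (the host, port and class name below are
placeholders, and it assumes the broker was started with a JMX port exposed,
e.g. via JMX_PORT), something along these lines lists the live threads
remotely:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import javax.management.MBeanServerConnection;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Connects to a broker's JMX endpoint and prints each live thread's name
    // and state; busy replica-fetcher or network threads show up immediately.
    public class BrokerThreadDump {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                        mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
                for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                    System.out.printf("%-60s %s%n",
                            info.getThreadName(), info.getThreadState());
                }
            } finally {
                connector.close();
            }
        }
    }

From there, ThreadMXBean.getThreadCpuTime(id) over the same connection, or an
attached profiler, can narrow down which threads are burning the CPU.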

- Jordan Pilat

On 2017-09-21 07:58, Elliot Crosby-McCullough <el...@freeagent.com> wrote: 
> Hello,
> 
> We've been trying to debug an issue with our kafka cluster for several days
> now and we're close to out of options.
> 
> We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
> nodes, plus a few streams clients and a ruby producer.
> 
> Two of the three brokers are pinning a core and have been for days, no
> amount of restarting, debugging, or clearing out of data seems to help.
> 
> We've got the logs at DEBUG level which shows a constant flow much like
> this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> 
> As best as we can tell the brokers are up to date on replication and the
> leaders are well-balanced.  The cluster is receiving no traffic; no
> messages are being sent in and the consumers/streams are shut down.
> 
> From our profiling of the JVM it looks like the CPU is mostly working in
> replication threads and SSL traffic (it's a secured cluster) but that
> shouldn't be treated as gospel.
> 
> Any advice would be greatly appreciated.
> 
> All the best,
> Elliot
> 

Re: Idle cluster high CPU usage

Posted by John Yost <ho...@gmail.com>.
Oh wow, okay, not sure what it is then.

On Thu, Sep 21, 2017 at 11:57 AM, Elliot Crosby-McCullough <
elliot.crosby-mccullough@freeagent.com> wrote:

> I cleared out the DB directories so the cluster is empty and no messages
> are being sent or received.
>
> On 21 September 2017 at 16:44, John Yost <ho...@gmail.com> wrote:
>
> > The only thing I can think of is message format...do the client and
> broker
> > versions match? If the clients are a lower version than brokers (i.e.,
> > 0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
> > format conversions both for incoming messages as well as for replication.
> >
> > --John
> >
> > On Thu, Sep 21, 2017 at 10:42 AM, Elliot Crosby-McCullough <
> > elliot.crosby-mccullough@freeagent.com> wrote:
> >
> > > Nothing, that value (that group of values) was at default when we
> started
> > > the debugging.
> > >
> > > On 21 September 2017 at 15:08, Ismael Juma <is...@juma.me.uk> wrote:
> > >
> > > > Thanks. What happens if you reduce num.replica.fetchers?
> > > >
> > > > On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> > > > elliot.crosby-mccullough@freeagent.com> wrote:
> > > >
> > > > > 551 partitions, broker configs are:
> > > > > https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
> > > > >
> > > > > We tweaked it a bit from standard recently but that was as part of
> > the
> > > > > debugging process.
> > > > >
> > > > > After some more experimentation I'm seeing the same behaviour at
> > about
> > > > half
> > > > > the CPU after creating one 50 partition topic in an otherwise empty
> > > > > cluster.
> > > > >
> > > > > On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk>
> > wrote:
> > > > >
> > > > > > A couple of questions: how many partitions in the cluster and
> what
> > > are
> > > > > your
> > > > > > broker configs?
> > > > > >
> > > > > > On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> > > > > > elliot.crosby-mccullough@freeagent.com> wrote:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > We've been trying to debug an issue with our kafka cluster for
> > > > several
> > > > > > days
> > > > > > > now and we're close to out of options.
> > > > > > >
> > > > > > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> > > > > registry
> > > > > > > nodes, plus a few streams clients and a ruby producer.
> > > > > > >
> > > > > > > Two of the three brokers are pinning a core and have been for
> > days,
> > > > no
> > > > > > > amount of restarting, debugging, or clearing out of data seems
> to
> > > > help.
> > > > > > >
> > > > > > > We've got the logs at DEBUG level which shows a constant flow
> > much
> > > > like
> > > > > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > > > > > >
> > > > > > > As best as we can tell the brokers are up to date on
> replication
> > > and
> > > > > the
> > > > > > > leaders are well-balanced.  The cluster is receiving no
> traffic;
> > no
> > > > > > > messages are being sent in and the consumers/streams are shut
> > down.
> > > > > > >
> > > > > > > From our profiling of the JVM it looks like the CPU is mostly
> > > working
> > > > > in
> > > > > > > replication threads and SSL traffic (it's a secured cluster)
> but
> > > that
> > > > > > > shouldn't be treated as gospel.
> > > > > > >
> > > > > > > Any advice would be greatly appreciated.
> > > > > > >
> > > > > > > All the best,
> > > > > > > Elliot
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by Elliot Crosby-McCullough <el...@freeagent.com>.
I cleared out the DB directories so the cluster is empty and no messages
are being sent or received.

On 21 September 2017 at 16:44, John Yost <ho...@gmail.com> wrote:

> The only thing I can think of is message format...do the client and broker
> versions match? If the clients are a lower version than brokers (i.e.,
> 0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
> format conversions both for incoming messages as well as for replication.
>
> --John
>
> On Thu, Sep 21, 2017 at 10:42 AM, Elliot Crosby-McCullough <
> elliot.crosby-mccullough@freeagent.com> wrote:
>
> > Nothing, that value (that group of values) was at default when we started
> > the debugging.
> >
> > On 21 September 2017 at 15:08, Ismael Juma <is...@juma.me.uk> wrote:
> >
> > > Thanks. What happens if you reduce num.replica.fetchers?
> > >
> > > On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> > > elliot.crosby-mccullough@freeagent.com> wrote:
> > >
> > > > 551 partitions, broker configs are:
> > > > https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
> > > >
> > > > We tweaked it a bit from standard recently but that was as part of
> the
> > > > debugging process.
> > > >
> > > > After some more experimentation I'm seeing the same behaviour at
> about
> > > half
> > > > the CPU after creating one 50 partition topic in an otherwise empty
> > > > cluster.
> > > >
> > > > On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk>
> wrote:
> > > >
> > > > > A couple of questions: how many partitions in the cluster and what
> > are
> > > > your
> > > > > broker configs?
> > > > >
> > > > > On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> > > > > elliot.crosby-mccullough@freeagent.com> wrote:
> > > > >
> > > > > > Hello,
> > > > > >
> > > > > > We've been trying to debug an issue with our kafka cluster for
> > > several
> > > > > days
> > > > > > now and we're close to out of options.
> > > > > >
> > > > > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> > > > registry
> > > > > > nodes, plus a few streams clients and a ruby producer.
> > > > > >
> > > > > > Two of the three brokers are pinning a core and have been for
> days,
> > > no
> > > > > > amount of restarting, debugging, or clearing out of data seems to
> > > help.
> > > > > >
> > > > > > We've got the logs at DEBUG level which shows a constant flow
> much
> > > like
> > > > > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > > > > >
> > > > > > As best as we can tell the brokers are up to date on replication
> > and
> > > > the
> > > > > > leaders are well-balanced.  The cluster is receiving no traffic;
> no
> > > > > > messages are being sent in and the consumers/streams are shut
> down.
> > > > > >
> > > > > > From our profiling of the JVM it looks like the CPU is mostly
> > working
> > > > in
> > > > > > replication threads and SSL traffic (it's a secured cluster) but
> > that
> > > > > > shouldn't be treated as gospel.
> > > > > >
> > > > > > Any advice would be greatly appreciated.
> > > > > >
> > > > > > All the best,
> > > > > > Elliot
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by John Yost <ho...@gmail.com>.
The only thing I can think of is message format: do the client and broker
versions match? If the clients are a lower version than the brokers (e.g.,
0.9.0.1 client, 0.10.0.1 broker), then I think there could be message
format conversions both for incoming messages and for replication.

--John

On Thu, Sep 21, 2017 at 10:42 AM, Elliot Crosby-McCullough <
elliot.crosby-mccullough@freeagent.com> wrote:

> Nothing, that value (that group of values) was at default when we started
> the debugging.
>
> On 21 September 2017 at 15:08, Ismael Juma <is...@juma.me.uk> wrote:
>
> > Thanks. What happens if you reduce num.replica.fetchers?
> >
> > On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> > elliot.crosby-mccullough@freeagent.com> wrote:
> >
> > > 551 partitions, broker configs are:
> > > https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
> > >
> > > We tweaked it a bit from standard recently but that was as part of the
> > > debugging process.
> > >
> > > After some more experimentation I'm seeing the same behaviour at about
> > half
> > > the CPU after creating one 50 partition topic in an otherwise empty
> > > cluster.
> > >
> > > On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk> wrote:
> > >
> > > > A couple of questions: how many partitions in the cluster and what
> are
> > > your
> > > > broker configs?
> > > >
> > > > On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> > > > elliot.crosby-mccullough@freeagent.com> wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > We've been trying to debug an issue with our kafka cluster for
> > several
> > > > days
> > > > > now and we're close to out of options.
> > > > >
> > > > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> > > registry
> > > > > nodes, plus a few streams clients and a ruby producer.
> > > > >
> > > > > Two of the three brokers are pinning a core and have been for days,
> > no
> > > > > amount of restarting, debugging, or clearing out of data seems to
> > help.
> > > > >
> > > > > We've got the logs at DEBUG level which shows a constant flow much
> > like
> > > > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > > > >
> > > > > As best as we can tell the brokers are up to date on replication
> and
> > > the
> > > > > leaders are well-balanced.  The cluster is receiving no traffic; no
> > > > > messages are being sent in and the consumers/streams are shut down.
> > > > >
> > > > > From our profiling of the JVM it looks like the CPU is mostly
> working
> > > in
> > > > > replication threads and SSL traffic (it's a secured cluster) but
> that
> > > > > shouldn't be treated as gospel.
> > > > >
> > > > > Any advice would be greatly appreciated.
> > > > >
> > > > > All the best,
> > > > > Elliot
> > > > >
> > > >
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by Elliot Crosby-McCullough <el...@freeagent.com>.
Nothing, that value (that group of values) was at default when we started
the debugging.

On 21 September 2017 at 15:08, Ismael Juma <is...@juma.me.uk> wrote:

> Thanks. What happens if you reduce num.replica.fetchers?
>
> On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
> elliot.crosby-mccullough@freeagent.com> wrote:
>
> > 551 partitions, broker configs are:
> > https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
> >
> > We tweaked it a bit from standard recently but that was as part of the
> > debugging process.
> >
> > After some more experimentation I'm seeing the same behaviour at about
> half
> > the CPU after creating one 50 partition topic in an otherwise empty
> > cluster.
> >
> > On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk> wrote:
> >
> > > A couple of questions: how many partitions in the cluster and what are
> > your
> > > broker configs?
> > >
> > > On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> > > elliot.crosby-mccullough@freeagent.com> wrote:
> > >
> > > > Hello,
> > > >
> > > > We've been trying to debug an issue with our kafka cluster for
> several
> > > days
> > > > now and we're close to out of options.
> > > >
> > > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> > registry
> > > > nodes, plus a few streams clients and a ruby producer.
> > > >
> > > > Two of the three brokers are pinning a core and have been for days,
> no
> > > > amount of restarting, debugging, or clearing out of data seems to
> help.
> > > >
> > > > We've got the logs at DEBUG level which shows a constant flow much
> like
> > > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > > >
> > > > As best as we can tell the brokers are up to date on replication and
> > the
> > > > leaders are well-balanced.  The cluster is receiving no traffic; no
> > > > messages are being sent in and the consumers/streams are shut down.
> > > >
> > > > From our profiling of the JVM it looks like the CPU is mostly working
> > in
> > > > replication threads and SSL traffic (it's a secured cluster) but that
> > > > shouldn't be treated as gospel.
> > > >
> > > > Any advice would be greatly appreciated.
> > > >
> > > > All the best,
> > > > Elliot
> > > >
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by Ismael Juma <is...@juma.me.uk>.
Thanks. What happens if you reduce num.replica.fetchers?

On Thu, Sep 21, 2017 at 3:02 PM, Elliot Crosby-McCullough <
elliot.crosby-mccullough@freeagent.com> wrote:

> 551 partitions, broker configs are:
> https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b
>
> We tweaked it a bit from standard recently but that was as part of the
> debugging process.
>
> After some more experimentation I'm seeing the same behaviour at about half
> the CPU after creating one 50 partition topic in an otherwise empty
> cluster.
>
> On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk> wrote:
>
> > A couple of questions: how many partitions in the cluster and what are
> your
> > broker configs?
> >
> > On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> > elliot.crosby-mccullough@freeagent.com> wrote:
> >
> > > Hello,
> > >
> > > We've been trying to debug an issue with our kafka cluster for several
> > days
> > > now and we're close to out of options.
> > >
> > > We have 3 kafka brokers associated with 3 zookeeper nodes and 3
> registry
> > > nodes, plus a few streams clients and a ruby producer.
> > >
> > > Two of the three brokers are pinning a core and have been for days, no
> > > amount of restarting, debugging, or clearing out of data seems to help.
> > >
> > > We've got the logs at DEBUG level which shows a constant flow much like
> > > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> > >
> > > As best as we can tell the brokers are up to date on replication and
> the
> > > leaders are well-balanced.  The cluster is receiving no traffic; no
> > > messages are being sent in and the consumers/streams are shut down.
> > >
> > > From our profiling of the JVM it looks like the CPU is mostly working
> in
> > > replication threads and SSL traffic (it's a secured cluster) but that
> > > shouldn't be treated as gospel.
> > >
> > > Any advice would be greatly appreciated.
> > >
> > > All the best,
> > > Elliot
> > >
> >
>

Re: Idle cluster high CPU usage

Posted by Elliot Crosby-McCullough <el...@freeagent.com>.
551 partitions, broker configs are:
https://gist.github.com/elliotcm/3a35f66377c2ef4020d76508f49f106b

We tweaked it a bit from standard recently but that was as part of the
debugging process.

After some more experimentation I'm seeing the same behaviour at about half
the CPU after creating one 50 partition topic in an otherwise empty cluster.
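
For concreteness, the repro was just creating a single 50-partition topic
against the otherwise empty cluster. A sketch with the AdminClient (assumes a
0.11+ client; the bootstrap address, topic name and replication factor are
placeholders, and the SSL client properties are omitted for brevity):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    // Creates one 50-partition topic; a single idle topic like this was
    // enough to trigger roughly half the CPU usage described above.
    public class CreateReproTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker-host:9092");
            AdminClient admin = AdminClient.create(props);
            try {
                NewTopic topic = new NewTopic("cpu-repro", 50, (short) 3);
                admin.createTopics(Collections.singleton(topic)).all().get();
            } finally {
                admin.close();
            }
        }
    }

(kafka-topics.sh --create with --partitions 50 does the same from the command
line.)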

On 21 September 2017 at 14:20, Ismael Juma <is...@juma.me.uk> wrote:

> A couple of questions: how many partitions in the cluster and what are your
> broker configs?
>
> On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
> elliot.crosby-mccullough@freeagent.com> wrote:
>
> > Hello,
> >
> > We've been trying to debug an issue with our kafka cluster for several
> days
> > now and we're close to out of options.
> >
> > We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
> > nodes, plus a few streams clients and a ruby producer.
> >
> > Two of the three brokers are pinning a core and have been for days, no
> > amount of restarting, debugging, or clearing out of data seems to help.
> >
> > We've got the logs at DEBUG level which shows a constant flow much like
> > this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
> >
> > As best as we can tell the brokers are up to date on replication and the
> > leaders are well-balanced.  The cluster is receiving no traffic; no
> > messages are being sent in and the consumers/streams are shut down.
> >
> > From our profiling of the JVM it looks like the CPU is mostly working in
> > replication threads and SSL traffic (it's a secured cluster) but that
> > shouldn't be treated as gospel.
> >
> > Any advice would be greatly appreciated.
> >
> > All the best,
> > Elliot
> >
>

Re: Idle cluster high CPU usage

Posted by Ismael Juma <is...@juma.me.uk>.
A couple of questions: how many partitions in the cluster and what are your
broker configs?

On Thu, Sep 21, 2017 at 1:58 PM, Elliot Crosby-McCullough <
elliot.crosby-mccullough@freeagent.com> wrote:

> Hello,
>
> We've been trying to debug an issue with our kafka cluster for several days
> now and we're close to out of options.
>
> We have 3 kafka brokers associated with 3 zookeeper nodes and 3 registry
> nodes, plus a few streams clients and a ruby producer.
>
> Two of the three brokers are pinning a core and have been for days, no
> amount of restarting, debugging, or clearing out of data seems to help.
>
> We've got the logs at DEBUG level which shows a constant flow much like
> this: https://gist.github.com/elliotcm/e66a1ca838558664bab0c91549acb251
>
> As best as we can tell the brokers are up to date on replication and the
> leaders are well-balanced.  The cluster is receiving no traffic; no
> messages are being sent in and the consumers/streams are shut down.
>
> From our profiling of the JVM it looks like the CPU is mostly working in
> replication threads and SSL traffic (it's a secured cluster) but that
> shouldn't be treated as gospel.
>
> Any advice would be greatly appreciated.
>
> All the best,
> Elliot
>