Posted to dev@phoenix.apache.org by Jody Landreneau <jo...@gmail.com> on 2014/08/15 20:56:36 UTC

client cache

hello Phoenix devs,

Let me explain an issue I would like to solve. We have multiple Phoenix
clients running, possibly on several physical machines (different VMs),
which act as storage/retrieval endpoints. If I change the schema of a
table, by adding or removing a column, the clients that did not issue the
ALTER start returning errors. This is caused by an internal client cache
that is not refreshed. Note that connections get their metadata from this
shared client cache, so creating and closing connections does not help.
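
To make this concrete, here is a rough sketch of the failure (the table,
column, and ZooKeeper host are made up; in our deployment the two
"clients" below actually run in separate JVMs on separate VMs):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class StaleCacheRepro {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:phoenix:zk-host"; // placeholder quorum

            // Client A issues the ALTER.
            try (Connection a = DriverManager.getConnection(url);
                 Statement st = a.createStatement()) {
                st.execute("ALTER TABLE MY_TABLE ADD NEW_COL VARCHAR");
            }

            // Client B (in reality a different JVM/VM) still holds the
            // old schema in its shared client cache, so this query fails.
            // Closing and reopening the connection does not help, since
            // every connection in the client draws from the same cache.
            try (Connection b = DriverManager.getConnection(url);
                 Statement st = b.createStatement()) {
                st.executeQuery("SELECT NEW_COL FROM MY_TABLE");
            }
        }
    }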

I would like to address this by adding a cache with timed expiration that
is also limited by size. I see that there is a Guava cache on the server
side, and I think doing something similar on the client side makes sense.
It could make things much simpler than having to deal with a pruner and
other bookkeeping code. I was wondering whether the community would accept
an approach like this. We could also reduce all the cloning of the cache,
potentially sharing a single cache across the connections that belong to a
client. I see that there is some work to manage the cache's capacity in
terms of bytes. Would it be reasonable to base the capacity on the number
of tables the cache holds instead of byte-level accounting? The cached
objects should be fairly lightweight, and if the same cache is shared
across connections, it should use even fewer resources.
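
Roughly, I am picturing something like this (a sketch only, assuming Guava
is on the client classpath; PTableStub and the numbers are placeholders,
not real Phoenix classes or settings):

    import java.util.concurrent.TimeUnit;

    import com.google.common.cache.Cache;
    import com.google.common.cache.CacheBuilder;

    public class ClientMetaDataCache {
        // Capacity is a number of tables rather than bytes, and entries
        // expire after a fixed interval, so a stale schema is eventually
        // re-fetched from the server without any pruner logic.
        private final Cache<String, PTableStub> tables = CacheBuilder.newBuilder()
                .maximumSize(1000)                      // max cached tables
                .expireAfterWrite(10, TimeUnit.MINUTES) // timed expiration
                .build();

        public PTableStub get(String fullTableName) {
            return tables.getIfPresent(fullTableName); // null => re-resolve
        }

        public void put(String fullTableName, PTableStub table) {
            tables.put(fullTableName, table);
        }

        public void remove(String fullTableName) {
            tables.invalidate(fullTableName);          // e.g. on DROP TABLE
        }

        // Stand-in for Phoenix's PTable metadata object.
        public static class PTableStub { }
    }

Since maximumSize counts entries, this sidesteps byte accounting entirely,
and a single instance could back all of a client's connections.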

Are there reasons not to take this approach?

Thanks in advance --

Re: client cache

Posted by Jody Landreneau <jo...@gmail.com>.
I have created this issue https://issues.apache.org/jira/browse/PHOENIX-1181
with the details.



Re: client cache

Posted by Jody Landreneau <jo...@gmail.com>.
Thanks James - Will do on Mon.

Re: client cache

Posted by James Taylor <ja...@apache.org>.
Hi Jody,
Thanks for reporting this. These are bugs, as the client should detect
and retry automatically in these cases when necessary. What version of
Phoenix are you using? Would you mind giving it a shot with 3.1/4.1?
We'll have an RC out on Monday at the latest.
Thanks,
James
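
For illustration, the automatic detect-and-retry should look roughly like
this from the client's perspective (isStaleSchema and
refreshCachedMetadata are placeholders for Phoenix-internal logic, not
public APIs):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class RetryOnStaleMetadata {
        // Sketch: run a query, and on a schema-related failure refresh
        // the cached table metadata and retry once. The real client does
        // this internally rather than exposing it to callers.
        static ResultSet queryWithRetry(Connection conn, String sql) throws SQLException {
            try {
                return conn.createStatement().executeQuery(sql);
            } catch (SQLException e) {
                if (!isStaleSchema(e)) {
                    throw e;
                }
                refreshCachedMetadata(conn); // placeholder: re-fetch schema from server
                return conn.createStatement().executeQuery(sql);
            }
        }

        private static boolean isStaleSchema(SQLException e) {
            return false; // placeholder: inspect the error code
        }

        private static void refreshCachedMetadata(Connection conn) {
            // placeholder: Phoenix-internal cache invalidation
        }
    }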
