Posted to dev@phoenix.apache.org by "James Taylor (JIRA)" <ji...@apache.org> on 2017/05/02 19:18:04 UTC

[jira] [Commented] (PHOENIX-3823) Force cache update on MetaDataEntityNotFoundException

    [ https://issues.apache.org/jira/browse/PHOENIX-3823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15993548#comment-15993548 ] 

James Taylor commented on PHOENIX-3823:
---------------------------------------

This JIRA can be broken down into the following parts:
- At the top level, in PhoenixStatement.executeMutation() and PhoenixStatement.executeQuery(), catch MetaDataEntityNotFoundException, update the cache via a MetaDataClient.updateCache() call, and retry once (see the sketch after this list).
- To test this, you'll need Phoenix connections backed by different ConnectionQueryServices instances (otherwise the connections share the same client-side cache). See here [1] for how to do that:
{code}
Connection conn1 = DriverManager.getConnection("jdbc:phoenix:my_server:longRunning", longRunningProps);
Connection conn2 = DriverManager.getConnection("jdbc:phoenix:my_server:shortRunning", shortRunningProps);
{code}
- In your unit test, you'll need to force Phoenix to use the real PhoenixDriver instead of the test driver by setting {{QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB}} to {{DEFAULT_EXTRA_JDBC_ARGUMENTS}} as is done in QueryTimeoutIT.
- Start by writing a unit test set up as above and confirm that, when an UPDATE_CACHE_FREQUENCY is set on the tables, a column added to a table through {{conn1}} is not seen when referenced by a query on {{conn2}} (a rough outline of such a test follows the footnote below). Then add the catch/retry logic mentioned above to handle the add table/column case.
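
As a sketch of the first bullet only (not the actual PhoenixStatement internals; executeOnce() and forceCacheUpdate() are placeholder names standing in for the existing execute path and the MetaDataClient.updateCache() call), the catch/retry could take roughly this shape:
{code}
// Shape of the catch/retry only; helper names are placeholders.
private ResultSet executeQueryWithRetry(String sql) throws SQLException {
    try {
        return executeOnce(sql);
    } catch (MetaDataEntityNotFoundException e) {
        // The referenced table/column may exist but be missing from the stale
        // client-side cache: force a cache update for the entities involved
        // in the statement, then retry exactly once (a second failure is
        // rethrown to the caller as before).
        forceCacheUpdate(e);
        return executeOnce(sql);
    }
}
{code}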

[1] https://phoenix.apache.org/index.html#Connection
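
For the test-setup bullets above, here's a rough outline under a few assumptions: the base class, table name, and DDL are placeholders, and the connection URLs are copied from the [1] example to stand in for whatever the test cluster exposes. The driver setup mirrors what QueryTimeoutIT does, as described above.
{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Map;

import org.apache.phoenix.query.BaseTest;
import org.apache.phoenix.query.QueryServices;
import org.apache.phoenix.query.QueryServicesOptions;
import org.apache.phoenix.util.ReadOnlyProps;
import org.junit.BeforeClass;
import org.junit.Test;

import com.google.common.collect.Maps;

public class StaleClientCacheIT extends BaseTest {

    @BeforeClass
    public static void doSetup() throws Exception {
        Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
        // Force use of the real PhoenixDriver instead of the test driver,
        // the same trick QueryTimeoutIT uses.
        props.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
                QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
    }

    @Test
    public void testAddColumnNotVisibleToSecondConnection() throws Exception {
        // Placeholder URLs from [1]: the distinct principals give each connection
        // its own ConnectionQueryServices, so their metadata caches are independent.
        try (Connection conn1 = DriverManager.getConnection("jdbc:phoenix:my_server:longRunning");
             Connection conn2 = DriverManager.getConnection("jdbc:phoenix:my_server:shortRunning")) {
            conn1.createStatement().execute(
                "CREATE TABLE T1 (K VARCHAR PRIMARY KEY, V1 VARCHAR) UPDATE_CACHE_FREQUENCY=600000");
            // Prime conn2's cache with the pre-ALTER schema
            conn2.createStatement().executeQuery("SELECT V1 FROM T1").next();
            conn1.createStatement().execute("ALTER TABLE T1 ADD V2 VARCHAR");
            // Without the catch/retry in PhoenixStatement, this reference to V2
            // fails because conn2 still has the old schema cached.
            conn2.createStatement().executeQuery("SELECT V2 FROM T1").next();
        }
    }
}
{code}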

> Force cache update on MetaDataEntityNotFoundException 
> ------------------------------------------------------
>
>                 Key: PHOENIX-3823
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3823
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: James Taylor
>            Assignee: Maddineni Sukumar
>
> When UPDATE_CACHE_FREQUENCY is used, clients will cache metadata for a period of time which may cause the schema being used to become stale. If another client adds a column or a new table or view, other clients won't see it. As a result, the client will get a MetaDataEntityNotFoundException. Instead of bubbling this up, we should retry after forcing a cache update on the tables involved in the query.
> The above works well for references to entities that don't yet exist. However, we cannot detect references to entities that no longer exist until the cache expires. An exception is a dropped physical table, which would be detected immediately; however, we would allow queries and updates against columns that have been dropped until the cache entry expires (which seems like a reasonable tradeoff IMHO). In addition, we won't start using indexes on tables until the cache expires.


