Posted to issues@flink.apache.org by hequn8128 <gi...@git.apache.org> on 2017/07/31 04:21:29 UTC

[GitHub] flink pull request #4421: [FLINK-7298] [table] Records can be cleared all at...

GitHub user hequn8128 opened a pull request:

    https://github.com/apache/flink/pull/4421

    [FLINK-7298] [table] Records can be cleared all at once when all data in state is invalid

    In `ProcTimeWindowInnerJoin.expireOutTimeRow`, we do not need to remove records one by one from the state when no valid records remain. Instead, we can clear them all at once.
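
    A minimal sketch of the idea in Scala (not the actual Flink code; the `rowMapState`, `keyIter`, and `validTimestamp` names follow this thread, while the method signature and loop structure are simplified assumptions):

        import java.util.{List => JList}
        import scala.collection.mutable.ArrayBuffer
        import org.apache.flink.api.common.state.MapState
        import org.apache.flink.types.Row

        // Simplified stand-in for the cleanup method inside the join operator.
        def expireOutTimeRow(
            expiredTime: Long,
            rowMapState: MapState[Long, JList[Row]]): Unit = {

          val keyIter = rowMapState.keys.iterator()
          val expiredKeys = ArrayBuffer[Long]()
          var validTimestamp = false

          // Collect expired keys; stop at the first valid (non-expired) key,
          // which is safe when the backend returns the keys in sorted order.
          while (keyIter.hasNext && !validTimestamp) {
            val recordTime = keyIter.next()
            if (recordTime < expiredTime) {
              expiredKeys += recordTime
            } else {
              validTimestamp = true
            }
          }

          if (validTimestamp) {
            // Some records are still valid: remove only the expired keys.
            expiredKeys.foreach(rowMapState.remove)
          } else {
            // The change proposed here: everything in the state is expired,
            // so clear it all at once instead of removing keys one by one.
            rowMapState.clear()
          }
        }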

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hequn8128/flink 7298

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/4421.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4421
    
----
commit 0d2641c13fa5e9aafca16c0fbdc87fab4abc05ec
Author: 军长 <he...@alibaba-inc.com>
Date:   2017-07-31T03:14:06Z

    [FLINK-7298] [table] Records can be cleared when all data in state is invalid

----


---

[GitHub] flink issue #4421: [FLINK-7298] [table] Records can be cleared all at once w...

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/4421
  
    Hi @hequn8128, I think the change is good.
    
    I don't think we need to remove `!validTimestamp` though. The clean-up is a best-effort cleanup and will register another clean-up timer if there are records left in the state. So we might not remove all expired records in one run, but we also do not have to iterate over all timestamps. This is an optimization for the RocksDB state backend, which provides the keys in sorted order, so in that case all expired records are removed without going over all keys of the MapState.
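
    A hedged sketch of the timer re-registration this relies on, assuming a `CoProcessFunction`-style timer service; the helper name and the `winSize` parameter are illustrative, not the operator's real API:

        import org.apache.flink.streaming.api.functions.co.CoProcessFunction
        import org.apache.flink.types.Row

        // Hypothetical helper: if a cleanup run left records in the state,
        // schedule another processing-time timer so the remaining expired
        // entries are removed in a later, best-effort run.
        def registerNextCleanup(
            ctx: CoProcessFunction[Row, Row, Row]#OnTimerContext,
            winSize: Long): Unit = {
          val nextCleanup = ctx.timerService().currentProcessingTime() + winSize
          ctx.timerService().registerProcessingTimeTimer(nextCleanup)
        }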


---

[GitHub] flink issue #4421: [FLINK-7298] [table] Records can be cleared all at once w...

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/4421
  
    Btw, please fill out the PR template the next time you open a pull request. See PR #4444 for an example.
    Thanks, Fabian


---

[GitHub] flink issue #4421: [FLINK-7298] [table] Records can be cleared all at once w...

Posted by hequn8128 <gi...@git.apache.org>.
Github user hequn8128 commented on the issue:

    https://github.com/apache/flink/pull/4421
  
    Also, I think we need to remove `!validTimestamp` from `while (keyIter.hasNext && !validTimestamp) {`, because new records may be inserted at the head of the rowMapState; in that case, expired data would always remain in the rowMapState.
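
    A small self-contained demo of that scenario (plain Scala collections rather than Flink state; the timestamps are made up): if the key iterator is not sorted, the early exit can stop at a fresh record and never reach older expired keys, whereas with sorted keys the expired ones always come first.

        // Mimics `while (keyIter.hasNext && !validTimestamp)`: collect expired
        // keys until the first valid (non-expired) key is seen.
        object EarlyExitOrderDemo extends App {
          val expiredTime = 100L

          def expiredKeysInOneRun(keys: Seq[Long]): Seq[Long] =
            keys.takeWhile(_ < expiredTime)

          // A fresh record (t = 200) happens to be iterated first.
          val insertionOrdered = Seq(200L, 50L, 60L)
          // In sorted order the expired keys (50, 60) come first.
          val sorted = insertionOrdered.sorted

          println(s"insertion order removes: ${expiredKeysInOneRun(insertionOrdered)}") // List()
          println(s"sorted order removes:    ${expiredKeysInOneRun(sorted)}")           // List(50, 60)
        }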


---

[GitHub] flink pull request #4421: [FLINK-7298] [table] Records can be cleared all at...

Posted by asfgit <gi...@git.apache.org>.
Github user asfgit closed the pull request at:

    https://github.com/apache/flink/pull/4421


---

[GitHub] flink issue #4421: [FLINK-7298] [table] Records can be cleared all at once w...

Posted by hequn8128 <gi...@git.apache.org>.
Github user hequn8128 commented on the issue:

    https://github.com/apache/flink/pull/4421
  
    OK, so the RocksDB state backend provides the keys in sorted order. Then we don't need to remove `!validTimestamp`. I will fill out the PR template next time.
    Thanks, hequn


---

[GitHub] flink issue #4421: [FLINK-7298] [table] Records can be cleared all at once w...

Posted by fhueske <gi...@git.apache.org>.
Github user fhueske commented on the issue:

    https://github.com/apache/flink/pull/4421
  
    merging


---