Posted to dev@allura.apache.org by Dave Brondsema <da...@brondsema.net> on 2015/10/20 20:41:24 UTC

[allura:tickets] #8006 Large timeline performance issue in activity stream



---

** [tickets:#8006] Large timeline performance issue in activity stream**

**Status:** open
**Milestone:** unreleased
**Labels:** performance activitystreams 
**Created:** Tue Oct 20, 2015 06:41 PM UTC by Dave Brondsema
**Last Updated:** Tue Oct 20, 2015 06:41 PM UTC
**Owner:** nobody


When an activity happens on a project and the `create_timelines` task runs, it executes ActivityStream's `Aggregator.create_timeline`.  If there happen to be no new records, it falls back to calling `get_timeline`.  That can be a problem because the pre-computed "timeline" there could contain thousands or millions of records.  Fetching it takes a while and can consume a large amount of memory that doesn't get reclaimed after the task is done.

We should evaluate whether that behavior is correct.  If it is needed, we should pass a `limit` parameter in.

Also perhaps see if we can figure out why these records aren't being garbage collected.
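To illustrate the proposed fix: a minimal sketch of what capping the fallback with a `limit` could look like. The class and method names mirror the ticket's description, but this is not the real ActivityStream API; the `limit` keyword and the fallback structure here are assumptions for illustration only.

```python
class Aggregator:
    """Illustrative stand-in for ActivityStream's Aggregator, not the real class."""

    def __init__(self, timeline):
        # `timeline` stands in for the pre-computed collection, which in the
        # problem case may hold thousands or millions of records.
        self.timeline = timeline

    def get_timeline(self, limit=None):
        # Without a limit, every record is materialized in memory at once;
        # with one, only the first `limit` records are loaded.
        if limit is None:
            return list(self.timeline)
        return list(self.timeline[:limit])

    def create_timeline(self, new_records):
        # Mirrors the problem case: with no new records, the code falls back
        # to fetching the timeline -- here capped by a limit instead of
        # loading the whole thing.
        if not new_records:
            return self.get_timeline(limit=100)
        return new_records
```

The point of the sketch is that the unbounded `get_timeline()` call is the only place the full collection gets pulled into memory, so a single `limit` argument at the fallback site would bound both runtime and memory.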


---

Sent from forge-allura.apache.org because dev@allura.apache.org is subscribed to https://forge-allura.apache.org/p/allura/tickets/

To unsubscribe from further messages, a project admin can change settings at https://forge-allura.apache.org/p/allura/admin/tickets/options.  Or, if this is a mailing list, you can unsubscribe from the mailing list.

[allura:tickets] #8006 Large timeline performance issue in activity stream

Posted by Dave Brondsema <da...@brondsema.net>.
- **status**: review --> closed



---

** [tickets:#8006] Large timeline performance issue in activity stream**

**Status:** closed
**Milestone:** unreleased
**Labels:** performance activitystreams 
**Created:** Tue Oct 20, 2015 06:41 PM UTC by Dave Brondsema
**Last Updated:** Tue May 03, 2016 08:14 PM UTC
**Owner:** nobody




---


[allura:tickets] #8006 Large timeline performance issue in activity stream

Posted by Dave Brondsema <da...@brondsema.net>.
- **status**: open --> review
- **Comment**:

Query removal in https://sourceforge.net/p/activitystream/code/ci/db/8006/~/tree/

Memory usage seems like it should be managed properly, at least from the Ming perspective.  We use the ming storage option for activitystream, and so the ming session gets registered globally, and all its ORM objects get cleaned up via `MingMiddleware` at the end of each request/task.  
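The cleanup pattern described above can be sketched as a generic WSGI middleware. This is an illustration of the idea, not Allura's actual `MingMiddleware`; the `session` object and its `clear()` method are hypothetical stand-ins for the globally registered Ming session.

```python
class SessionCleanupMiddleware:
    """Sketch of the request-scoped cleanup idea, not the real MingMiddleware."""

    def __init__(self, app, session):
        self.app = app
        # Hypothetical session object exposing clear(); stands in for the
        # globally registered Ming ORM session.
        self.session = session

    def __call__(self, environ, start_response):
        try:
            return self.app(environ, start_response)
        finally:
            # Drop every ORM object tracked during this request/task so the
            # identity map can be garbage collected once the work is done.
            self.session.clear()
```

If cleanup like this already runs at the end of each task, the lingering memory would have to come from somewhere outside the session's identity map, which is why the ticket's remaining question is where those references are held.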



---

** [tickets:#8006] Large timeline performance issue in activity stream**

**Status:** review
**Milestone:** unreleased
**Labels:** performance activitystreams 
**Created:** Tue Oct 20, 2015 06:41 PM UTC by Dave Brondsema
**Last Updated:** Tue Oct 20, 2015 06:41 PM UTC
**Owner:** nobody




---
