Posted to commits@storm.apache.org by kn...@apache.org on 2015/10/22 21:08:40 UTC

[1/2] storm git commit: Minor grammar fix to FAQ

Repository: storm
Updated Branches:
  refs/heads/master b0f3be0a5 -> d72e277ee


Minor grammar fix to FAQ

Missing subject.


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/8ae6b574
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/8ae6b574
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/8ae6b574

Branch: refs/heads/master
Commit: 8ae6b574a3fe41eae5afbc20ccff23a53bc11aee
Parents: f75fdde
Author: Edward Samson <ed...@samson.ph>
Authored: Wed Oct 21 09:54:31 2015 +0800
Committer: Edward Samson <ed...@samson.ph>
Committed: Wed Oct 21 09:54:31 2015 +0800

----------------------------------------------------------------------
 docs/documentation/FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/8ae6b574/docs/documentation/FAQ.md
----------------------------------------------------------------------
diff --git a/docs/documentation/FAQ.md b/docs/documentation/FAQ.md
index a69862e..90222d1 100644
--- a/docs/documentation/FAQ.md
+++ b/docs/documentation/FAQ.md
@@ -112,7 +112,7 @@ You can't change the overall batch size once generated, but you can change the n
 
 ### How do I aggregate events by time?
 
-If have records with an immutable timestamp, and you would like to count, average or otherwise aggregate them into discrete time buckets, Trident is an excellent and scalable solution.
+If you have records with an immutable timestamp, and you would like to count, average or otherwise aggregate them into discrete time buckets, Trident is an excellent and scalable solution.
 
 Write an `Each` function that turns the timestamp into a time bucket: if the bucket size was "by hour", then the timestamp `2013-08-08 12:34:56` would be mapped to the `2013-08-08 12:00:00` time bucket, and so would everything else in the twelve o'clock hour. Then group on that timebucket and use a grouped persistentAggregate. The persistentAggregate uses a local cacheMap backed by a data store. Groups with many records require very few reads from the data store, and use efficient bulk reads and writes; as long as your data feed is relatively prompt Trident will make very efficient use of memory and network. Even if a server drops off line for a day, then delivers that full day's worth of data in a rush, the old results will be calmly retrieved and updated -- and without interfering with calculating the current results.
 
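The bucketing step in the patched FAQ paragraph above can be sketched in plain Java without the Trident API. This is a minimal illustration of what such an `Each` function would compute, assuming timestamps arrive as `yyyy-MM-dd HH:mm:ss` strings; the class and method names are hypothetical, not part of Storm.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class HourBucket {
    // Assumed input format; adjust to match your actual records.
    static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    // Map a record's timestamp to its "by hour" bucket by truncating
    // minutes and seconds, as described in the FAQ.
    static String toHourBucket(String timestamp) {
        LocalDateTime t = LocalDateTime.parse(timestamp, FMT);
        return t.truncatedTo(ChronoUnit.HOURS).format(FMT);
    }

    public static void main(String[] args) {
        // The FAQ's example: 12:34:56 falls into the 12:00:00 bucket.
        System.out.println(toHourBucket("2013-08-08 12:34:56"));
    }
}
```

In a real topology, the emitted bucket string would become the field you group on before the grouped persistentAggregate.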


[2/2] storm git commit: Merge branch 'faq-fix' of https://github.com/esamson/storm

Posted by kn...@apache.org.
Merge branch 'faq-fix' of https://github.com/esamson/storm


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/d72e277e
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/d72e277e
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/d72e277e

Branch: refs/heads/master
Commit: d72e277eece0155cbb5e306676f7c025cdaf9e87
Parents: b0f3be0 8ae6b57
Author: Kyle Nusbaum <Ky...@gmail.com>
Authored: Thu Oct 22 14:06:51 2015 -0500
Committer: Kyle Nusbaum <Ky...@gmail.com>
Committed: Thu Oct 22 14:06:51 2015 -0500

----------------------------------------------------------------------
 docs/documentation/FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------