Posted to issues@flink.apache.org by "Sebastian Schelter (JIRA)" <ji...@apache.org> on 2014/11/21 14:15:34 UTC

[jira] [Created] (FLINK-1269) Easy way to "group count" dataset

Sebastian Schelter created FLINK-1269:
-----------------------------------------

             Summary: Easy way to "group count" dataset
                 Key: FLINK-1269
                 URL: https://issues.apache.org/jira/browse/FLINK-1269
             Project: Flink
          Issue Type: New Feature
          Components: Java API, Scala API
    Affects Versions: 0.7.0-incubating
            Reporter: Sebastian Schelter


Flink should offer an easy way to group datasets and compute the sizes of the resulting groups. This is one of the most essential operations in distributed processing, yet it is very hard to implement in Flink.

I assume this could be a show-stopper for people trying out Flink, because at the moment users have to perform the grouping themselves and then write a groupReduce function that counts the tuples in the group and extracts the group key at the same time.

Here is what I would semantically expect to happen:

{noformat}
def groupCount[T, K](data: DataSet[T], extractKey: T => K): DataSet[(K, Long)] = {
  data.groupBy { extractKey }
      .reduceGroup { group => countBy(extractKey, group) }
}

private[this] def countBy[T, K](extractKey: T => K,
                                group: Iterator[T]): (K, Long) = {
  // groups handed to reduceGroup are never empty, so next() is safe
  val key = extractKey(group.next())

  var count = 1L
  while (group.hasNext) {
    group.next()
    count += 1
  }

  key -> count
}
{noformat}
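
The expected semantics can be sketched with plain Scala collections, without a Flink dependency (the object and method names here are illustrative, not part of any proposed API):

```scala
object GroupCountSketch {

  // Local-collection sketch of the proposed groupCount semantics:
  // group the elements by a key and emit one (key, group size) pair per group.
  def groupCount[T, K](data: Seq[T], extractKey: T => K): Seq[(K, Long)] =
    data.groupBy(extractKey)
        .map { case (key, group) => key -> group.size.toLong }
        .toSeq

  def main(args: Array[String]): Unit = {
    val words = Seq("to", "be", "or", "not", "to", "be")
    // groupCount with the identity key function yields word counts
    println(groupCount(words, identity[String]).toMap)
  }
}
```

On a DataSet the same result would come from the groupBy/reduceGroup combination shown above; the point of the feature request is that this one-liner should be available out of the box.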

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)