Posted to reviews@spark.apache.org by viirya <gi...@git.apache.org> on 2017/08/19 13:41:52 UTC

[GitHub] spark pull request #18999: [SPARK-21779][PYTHON] Simpler DataFrame.sample AP...

Github user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/18999#discussion_r134092119
  
    --- Diff: python/pyspark/sql/dataframe.py ---
    @@ -659,19 +659,77 @@ def distinct(self):
             return DataFrame(self._jdf.distinct(), self.sql_ctx)
     
         @since(1.3)
    -    def sample(self, withReplacement, fraction, seed=None):
    +    def sample(self, withReplacement=None, fraction=None, seed=None):
             """Returns a sampled subset of this :class:`DataFrame`.
     
    +        :param withReplacement: Sample with replacement or not (default False).
    +        :param fraction: Fraction of rows to generate, range [0.0, 1.0].
    +        :param seed: Seed for sampling (default a random seed).
    +
             .. note:: This is not guaranteed to provide exactly the fraction specified of the total
                 count of the given :class:`DataFrame`.
     
    -        >>> df.sample(False, 0.5, 42).count()
    -        2
    -        """
    -        assert fraction >= 0.0, "Negative fraction value: %s" % fraction
    -        seed = seed if seed is not None else random.randint(0, sys.maxsize)
    -        rdd = self._jdf.sample(withReplacement, fraction, long(seed))
    -        return DataFrame(rdd, self.sql_ctx)
    +        .. note:: `fraction` is required; `withReplacement` and `seed` are optional.
    +
    +        >>> df = spark.range(10)
    +        >>> df.sample(0.5, 3).count()
    +        4
    +        >>> df.sample(fraction=0.5, seed=3).count()
    +        4
    +        >>> df.sample(withReplacement=True, fraction=0.5, seed=3).count()
    +        1
    +        >>> df.sample(1.0).count()
    +        10
    +        >>> df.sample(fraction=1.0).count()
    +        10
    +        >>> df.sample(False, fraction=1.0).count()
    +        10
    +        >>> df.sample("a").count()
    +        Traceback (most recent call last):
    +            ...
    +        TypeError:...
    +        >>> df.sample(seed="abc").count()
    +        Traceback (most recent call last):
    +            ...
    +        TypeError:...
    +        """
    +
    +        # For the cases below:
    +        #   sample(True, 0.5 [, seed])
    +        #   sample(True, fraction=0.5 [, seed])
    +        #   sample(withReplacement=False, fraction=0.5 [, seed])
    +        is_withReplacement_set = \
    +            type(withReplacement) == bool and isinstance(fraction, float)
    +
    +        # For the case below:
    +        #   sample(fraction=0.5 [, seed])
    +        is_withReplacement_omitted_kwargs = \
    +            withReplacement is None and isinstance(fraction, float)
    +
    +        # For the case below:
    +        #   sample(0.5 [, seed])
    +        is_withReplacement_omitted_args = isinstance(withReplacement, float)
    +
    +        if not (is_withReplacement_set
    +                or is_withReplacement_omitted_kwargs
    +                or is_withReplacement_omitted_args):
    +            argtypes = [
    +                str(type(arg)) for arg in [withReplacement, fraction, seed] if arg is not None]
    +            raise TypeError(
    +                "withReplacement (optional), fraction (required) and seed (optional)"
    +                " should be a bool, float and number; however, "
    +                "got %s." % ", ".join(argtypes))
    --- End diff --
    
    With this change, all three parameters default to `None`. If `sample()` is called with no arguments, `argtypes` ends up as an empty list here, so the error message would read "got ." with nothing after "got", right?
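    To illustrate the concern, here is a minimal standalone sketch (outside of Spark, with the validation logic copied from the diff above into a hypothetical helper `check_sample_args`) showing that when all three arguments are left as `None`, `argtypes` is empty and the raised message trails off after "got":

    ```python
    def check_sample_args(withReplacement=None, fraction=None, seed=None):
        # Mirrors the three dispatch checks from the diff above.
        #   sample(True, 0.5 [, seed]) / sample(withReplacement=False, fraction=0.5 [, seed])
        is_withReplacement_set = (
            type(withReplacement) == bool and isinstance(fraction, float))
        #   sample(fraction=0.5 [, seed])
        is_withReplacement_omitted_kwargs = (
            withReplacement is None and isinstance(fraction, float))
        #   sample(0.5 [, seed])
        is_withReplacement_omitted_args = isinstance(withReplacement, float)

        if not (is_withReplacement_set
                or is_withReplacement_omitted_kwargs
                or is_withReplacement_omitted_args):
            # All-None arguments are filtered out, leaving argtypes == []
            argtypes = [str(type(arg))
                        for arg in [withReplacement, fraction, seed]
                        if arg is not None]
            raise TypeError(
                "withReplacement (optional), fraction (required) and seed (optional)"
                " should be a bool, float and number; however, "
                "got %s." % ", ".join(argtypes))

    try:
        check_sample_args()  # no arguments at all
    except TypeError as e:
        print(str(e).endswith("got ."))  # True: the type list is empty
    ```

    So the guard does raise for a bare call, but the message carries no type information in that case; a dedicated "fraction is required" message for the all-`None` case might be clearer.
    
    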

