Posted to github@beam.apache.org by GitBox <gi...@apache.org> on 2021/06/14 07:37:08 UTC

[GitHub] [beam] TheNeuralBit commented on a change in pull request #15002: [BEAM-9547] Dataframe weighted sample.

TheNeuralBit commented on a change in pull request #15002:
URL: https://github.com/apache/beam/pull/15002#discussion_r650241149



##########
File path: sdks/python/apache_beam/dataframe/frames_test.py
##########
@@ -1607,6 +1607,30 @@ def test_sample_with_missing_weights(self):
     self.assertEqual(series_result.name, "GDP")
     self.assertEqual(set(series_result.index), set(["Nauru", "Iceland"]))
 
+  def test_sample_with_weights_distribution(self):
+    num_other_elements = 100
+    num_runs = 20
+
+    def sample_many_times(s, weights):
+      all = None
+      for _ in range(num_runs):
+        sampled = s.sample(weights=weights)
+        if all is None:
+          all = sampled
+        else:
+          all = all.append(sampled)
+      return all.sum()
+
+    result = self._run_test(
+        sample_many_times,
+        # The first element is 1, the rest are all 0.  This means that when
+        # we sum all the sampled elements (above), the result should be the
+        # number of times the first element was sampled.
+        pd.Series([1] + [0] * num_other_elements),
+        # Pick the first element about 20% of the time.
+        pd.Series([0.2] + [.8 / num_other_elements] * num_other_elements))
+
+    self.assertTrue(0 < result < num_runs / 2, result)

Review comment:
    Discussed this offline; I think the probability of a flake here is too high. `result` should follow a binomial distribution with n=20, p=0.2.

    P(result=0) alone is `0.8**20 ~= 1e-2`. I don't think this approach can totally eliminate the possibility of a flake, but we should probably get it at least below 1e-3. I'm not sure we can tweak the constants enough to make that work while still getting a signal from this test, given that we also don't want to make `n` so high that it slows the test down.
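
    For reference, here's a quick standalone check of that flake rate (a stdlib-only sketch; the `binom_pmf` helper is illustrative, not part of the PR). The assertion `0 < result < num_runs / 2` passes only for `result` in 1..9, so the flake rate is the combined mass of the two tails:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

num_runs, p = 20, 0.2
# `0 < result < num_runs / 2` fails when result == 0 or result >= 10.
p_zero = binom_pmf(0, num_runs, p)
p_upper = sum(binom_pmf(k, num_runs, p) for k in range(10, num_runs + 1))
print(f"P(result == 0)  ~= {p_zero:.1e}")   # ~1.2e-02
print(f"P(result >= 10) ~= {p_upper:.1e}")  # ~2.6e-03
print(f"total flake rate ~= {p_zero + p_upper:.1e}")  # ~1.4e-02, well above 1e-3
```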

##########
File path: sdks/python/apache_beam/dataframe/frames_test.py
##########
@@ -1607,6 +1608,29 @@ def test_sample_with_missing_weights(self):
     self.assertEqual(series_result.name, "GDP")
     self.assertEqual(set(series_result.index), set(["Nauru", "Iceland"]))
 
+  def test_sample_with_weights_distribution(self):
+    target_prob = 0.25
+    num_samples = 100
+    num_targets = 200
+    num_other_elements = 10000
+
+    target_weight = target_prob / num_targets
+    other_weight = (1 - target_prob) / num_other_elements
+    self.assertTrue(target_weight > other_weight * 10, "weights too close")
+
+    result = self._run_test(
+        lambda s,
+        weights: s.sample(n=num_samples, weights=weights).sum(),
+        # The first elements are 1, the rest are all 0.  This means that when
+        # we sum all the sampled elements (above), the result should be the
+        # number of times the first elements (aka targets) were sampled.
+        pd.Series([1] * num_targets + [0] * num_other_elements),
+        pd.Series([target_weight] * num_targets +
+                  [other_weight] * num_other_elements))
+
+    expected = num_samples * target_prob
+    self.assertTrue(expected / 3 < result < expected * 2, (expected, result))

Review comment:
    ```suggestion
        # Note: the probabilistic nature of this test means it will flake
        # roughly once in every 100,000 runs.
        expected = num_samples * target_prob
        self.assertTrue(expected / 3 < result < expected * 2, (expected, result))
    ```

    Let's add a note about the (low) flake probability here.
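
    For what it's worth, that figure checks out. Here's a standalone sketch (stdlib-only, with the same illustrative `binom_pmf` helper as above) that computes the flake rate for the revised constants; `expected / 3 < result < expected * 2` passes only for `result` in 9..49:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

num_samples, target_prob = 100, 0.25
# expected = 25, so the assertion fails when result <= 8 or result >= 50.
lower = sum(binom_pmf(k, num_samples, target_prob) for k in range(9))
upper = sum(
    binom_pmf(k, num_samples, target_prob)
    for k in range(50, num_samples + 1))
print(f"flake rate ~= {lower + upper:.1e}")  # ~1.2e-05, roughly 1 in 100,000
```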



