Posted to dev@pulsar.apache.org by Lin Zhao <li...@streamnative.io.INVALID> on 2022/09/09 16:26:03 UTC

[DISCUSS] Code coverage report for PRs to master

Hi,

I'd like to start a discussion about turning on CodeCov report for PRs to
master to show the PR's impact on unit test coverage. Previous discussion
on https://github.com/apache/pulsar/pull/17382.

Proposal:
1. Unit test coverage will be added to the CI pipeline and reported to the
PR page.
Sample report:

@@            Coverage Diff            @@
##             master   #17382   +/-   ##
=========================================
  Coverage          ?   32.10%
  Complexity        ?     4141
=========================================
  Files             ?      387
  Lines             ?    42806
  Branches          ?     4420
=========================================
  Hits              ?    13741
  Misses            ?    27032
  Partials          ?     2033

2. The report will serve as additional input for the reviewers. The
requester is expected to explain any significant negative impact.
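For what it's worth, the headline figure in the sample report is internally consistent: hits, misses, and partials partition the total lines, and hits/lines gives the 32.10%. A quick Python check of the arithmetic:

```python
# Reproduce the headline number from the sample report's counters.
hits = 13741      # lines executed by at least one test
misses = 27032    # lines never executed
partials = 2033   # lines with only some branches covered
lines = 42806     # total instrumented lines

# The three buckets partition the instrumented lines.
assert hits + misses + partials == lines

coverage = 100 * hits / lines
print(f"{coverage:.2f}%")  # -> 32.10%
```

(How Codecov buckets partial hits may differ in detail; this only shows the sample numbers add up.)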

Why?


   1. The existing code coverage for Pulsar is very poor, at just above
50%. A reasonable expectation is 90% for libraries and 70-80% for the
broker; we are at roughly 60% and 50%.
   2. The coverage report would prevent coverage from getting worse
and bring the conversation to the table.
   3. Even though code coverage has its limitations in assessing test
quality, it's the only viable tool.



Thoughts?

Re: [DISCUSS] Code coverage report for PRs to master

Posted by Elliot West <el...@streamnative.io.INVALID>.
Hello devs,

I think that at a minimum we should be able to detect coverage regression
and generate a visible warning on a PR. Specific statistics in the PR
conversation may be less useful, and potentially noisy. However, I'd
suggest also including some developer documentation that describes how to
generate a coverage report locally to assist developers who want/need to
increase the coverage of their code submission. Finally, when we wear our
reviewing hats, we need to expect a set of well-constructed and maintained
tests with each PR. This also means checking for determinism, a good
balance of unit vs integration tests, test performance, and
appropriate refactoring.
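The regression warning described above could be little more than comparing the PR's line coverage against the base branch; a hypothetical Python sketch (the `regression_warning` helper, the report shape, and the 0.1-point tolerance are all assumptions for illustration, not an existing Pulsar tool):

```python
from typing import Optional

def coverage_pct(hits: int, lines: int) -> float:
    """Line coverage as a percentage."""
    return 100.0 * hits / lines

def regression_warning(base: dict, pr: dict,
                       tolerance: float = 0.1) -> Optional[str]:
    """Warn when the PR drops line coverage by more than `tolerance` points."""
    base_pct = coverage_pct(base["hits"], base["lines"])
    pr_pct = coverage_pct(pr["hits"], pr["lines"])
    drop = base_pct - pr_pct
    if drop > tolerance:
        return (f"coverage dropped {drop:.2f} points "
                f"({base_pct:.2f}% -> {pr_pct:.2f}%)")
    return None

# Example: a PR adding 500 wholly uncovered lines to the sample totals.
base = {"hits": 13741, "lines": 42806}
pr = {"hits": 13741, "lines": 43306}
print(regression_warning(base, pr))
```

Warning only on a significant drop, rather than posting full statistics, keeps the PR conversation low-noise.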

Elliot.

On Sun, 11 Sept 2022 at 02:42, PengHui Li <pe...@apache.org> wrote:

> Hi Lin,
>
> Thanks for starting this discussion.
>
> First, I support having a code coverage report for the PR, which will give
> reviewers an initial impression of the test coverage. Although the code
> coverage report can produce distorted results, it is not the supreme
> indicator, just an auxiliary tool.
>
> Also, it should be a good reminder for contributors and reviewers to pay
> attention to test coverage during coding and before merging the PR.
>
> Second, about CI congestion, IMO we should make the test runs more
> efficient and stable. We are working on a list of all the tests that run
> for over 5 minutes; many poorly designed tests consume a lot of resources
> (starting/stopping the service for each test). Rather than preventing new
> CI steps, of course we need to pay attention to the resource consumption
> of the new upcoming CI. As for the flaky tests, I believe the situation is
> getting better.
>
> For https://github.com/apache/pulsar/pull/17382, I can't see the coverage
> diff. It would be a great indicator for contributors and reviewers; I
> think Lishen is still working on this part? When I checked the CI to see
> the resource consumption of the code coverage check, it looked like it had
> been skipped, and I'm not sure why. Maybe Lishen can provide more details?
>
> Regards,
> Penghui
>
> On Sun, Sep 11, 2022 at 1:39 AM Lin Zhao <lin.zhao@streamnative.io
> .invalid>
> wrote:
>
> > Hi Xiangying,
> >
> > I totally agree that when new feature or major optimizations are
> submitted,
> > it should be in the owner's interest to care about code coverage for
> their
> > new feature. The test cases are protection of their code from unintended
> > side effects from other people's work.
> >
> > In the case the PR is a bug fix, we should attempt to add a unit test to
> > cover this bug, which ideally increases the coverage. At the minimum, it
> > should not add complex uncovered code and significantly decrease the
> > coverage.
> >
> > I think both the button to run code coverage, and the report in the PR
> has
> > value. I support both.
> >
> > Lin
> >
> > On Fri, Sep 9, 2022 at 6:44 PM Xiangying Meng <xi...@apache.org>
> > wrote:
> >
> > > Hi,
> > > IMO, there are two scenarios when users need test coverage:
> > > 1. When new features or major optimizations are submitted, the owners
> of
> > > the PR may care about the code coverage of these new features.
> > > 2. When we need to optimize the original code to increase the overall
> > code
> > > test coverage of Pulsar.
> > > And there are two scenarios when users don't need test coverage:
> > > 1. The PR only is a Bug Fix
> > > 2. There is a small local optimization in this PR
> > > In fact, most PRs for Pulsar right now are bug fixes or small
> > > optimizations. The PR owner does not need to consider test coverage.
> > >
> > > So, my suggestion is that we can provide a button to run code coverage
> > and
> > > download coverage reports.
> > >  And put this button on the CI detail page, so that people who really
> > need
> > > it can get the code test coverage without affecting the running of most
> > > other CIs.
> > >
> > > Sincerely,
> > > Xiangying
> > >
> > > On Sat, Sep 10, 2022 at 5:13 AM asn <ya...@gmail.com> wrote:
> > >
> > > > Hi,
> > > >
> > > > IMO, code coverage is not just a number, 50% or 70% makes no sense
> > except
> > > > to let us feel that we have more confidence. So what is really
> > > important? I
> > > > think it is the *coverage* itself, we need to see where we need to
> > write
> > > > tests in the future based the result because only if we have this
> data,
> > > we
> > > > could know where to cover. Furthermore, we can force every pr to
> write
> > > high
> > > > quality test.
> > > >
> > > > tison <wa...@gmail.com> 于2022年9月10日周六 01:37写道:
> > > >
> > > > > > Please provide data point its impact to CI stability.
> > > > >
> > > > > If you take a look at https://github.com/apache/pulsar/pull/17382,
> > it
> > > > > changes pulsar-ci.yml to run commands for generating the codecov
> > > report.
> > > > > I'm unsure of the impact of a new step, it may not affect too much
> > > since
> > > > > there's already a runner take the whole job.
> > > > >
> > > > > For the rest, I wrote:
> > > > >
> > > > > > I read the output in your proposal as simply:
> > > > > >> The report will serve as additional input for the reviewers. The
> > > > > requester
> > > > > is expected to explain any significant negative impact.
> > > > > > This works for me as a nonmandatory helper.
> > > > >
> > > > > Feelings can be biased. No objection to the action.
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:
> > > > >
> > > > > > Hi Tison,
> > > > > >
> > > > > > Thanks for your input. I agree that the community should focus on
> > the
> > > > > > priorities regarding CI that you mentioned. At the same time, I'm
> > > > having
> > > > > a
> > > > > > hard time understanding the negative impact that you suggested
> from
> > > > this
> > > > > > change.
> > > > > >
> > > > > > 1. To my knowledge code coverage calculation adds little overhead
> > to
> > > > ci.
> > > > > > Please provide data point its impact to CI stability.
> > > > > > 2. Code coverage isn't completely accurate and yet it's valuable
> > data
> > > > > > point. The goal is not to reach 100%, but to prevent coverage
> > > becoming
> > > > > > worse. To me this change has only benefits and not drawbacks.
> > > > > > 3. I don't agree with your sentiments about code coverage.
> There's
> > > > > material
> > > > > > difference between 50% and 70% and we are at the low end of 50%.
> > *We
> > > > will
> > > > > > not add a commit gate to coverage change. *The report serves as
> an
> > > > > > additional data point for the approvers.
> > > > > >
> > > > > > Lin
> > > > > >
> > > > > >
> > > > > > On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com>
> wrote:
> > > > > >
> > > > > > > Hi Lin,
> > > > > > >
> > > > > > > Thanks for starting this discussion!
> > > > > > >
> > > > > > > As long as it takes a different resource set from current CI
> > tasks,
> > > > I'm
> > > > > > +0
> > > > > > > as commented on PR-17382. I hardly read the report.
> > > > > > >
> > > > > > > I read the output in your proposal as simply:
> > > > > > >
> > > > > > > > The report will serve as additional input for the reviewers.
> > The
> > > > > > > requester
> > > > > > > is expected to explain any significant negative impact.
> > > > > > >
> > > > > > > This works for me as a nonmandatory helper.
> > > > > > >
> > > > > > > Thoughts on the baseline:
> > > > > > >
> > > > > > > 1. I don't like Code Coverage because from my experience
> they're
> > > > > _never_
> > > > > > > precise. And chasing 100% test coverage is unnecessary.
> > > > > > > 2. Although, if test coverage is below 50%, it _is_ a signal to
> > me
> > > > that
> > > > > > we
> > > > > > > suffer from too few tests.
> > > > > > > 3. Over 50% or 70%, test coverage looks the same to me.
> > > > > > >
> > > > > > > And our tests suffer from:
> > > > > > >
> > > > > > > 1. Heavyweight. We're now working on resolving CI congestion.
> > Many
> > > > > "unit
> > > > > > > tests" are actually integration tests with mock Pulsar Service
> > etc.
> > > > > which
> > > > > > > setup/teardown takes over ten (tens of) seconds.
> > > > > > > 2. Brittle. Too many mocks and radical powermocks make tests
> > > brittle.
> > > > > > > Mockito suggests never mock classes you don't own, but it seems
> > > > nobody
> > > > > > > cares. We mock over ES and ZK clients and so on.
> > > > > > >
> > > > > > > I don't want to spread the discussion over tests. But when it
> > comes
> > > > to
> > > > > > > improving tests, I want to share the most emergent items on it.
> > > > > > >
> > > > > > > Best,
> > > > > > > tison.
> > > > > > >
> > > > > > >
> > > > > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六
> > 00:26写道:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > I'd like to start a discussion about turning on CodeCov
> report
> > > for
> > > > > PRs
> > > > > > to
> > > > > > > > master to show the PR's impact on unit test coverage.
> Previous
> > > > > > discussion
> > > > > > > > on https://github.com/apache/pulsar/pull/17382.
> > > > > > > >
> > > > > > > > Proposal:
> > > > > > > > 1. Unit test coverage will be added to the CI pipeline and
> > > reported
> > > > > to
> > > > > > > the
> > > > > > > > PR page.
> > > > > > > > Sample report:
> > > > > > > >
> > > > > > > > @@            Coverage Diff            @@
> > > > > > > > ##             master   #17382   +/-   ##
> > > > > > > > =========================================
> > > > > > > >   Coverage          ?   32.10%
> > > > > > > >   Complexity        ?     4141
> > > > > > > > =========================================
> > > > > > > >   Files             ?      387
> > > > > > > >   Lines             ?    42806
> > > > > > > >   Branches          ?     4420
> > > > > > > > =========================================
> > > > > > > >   Hits              ?    13741
> > > > > > > >   Misses            ?    27032
> > > > > > > >   Partials          ?     2033
> > > > > > > >
> > > > > > > > 2. The report will serve as additional input for the
> reviewers.
> > > The
> > > > > > > > requester is expected to explain any significant negative
> > impact.
> > > > > > > >
> > > > > > > > Why?
> > > > > > > >
> > > > > > > >
> > > > > > > >    1. The existing code coverage for Pulsar is very poor at
> > just
> > > > > above
> > > > > > > > 50%. Reasonable expectation for libraries is 90% and 70 or
> 80%
> > > for
> > > > > the
> > > > > > > > broker. We are at 60% and 50%.
> > > > > > > >    2. The coverage report would prevent coverage from getting
> > > worse
> > > > > > > > and bring the conversation on the table.
> > > > > > > >    3. Even though code coverage has its limitation in
> assessing
> > > > test
> > > > > > > > coverage, it's the only viable tool.
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Thoughts?
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > > >
> > > > --
> > > > yaalsn
> > > >
> > >
> >
>


-- 

Elliot West

Senior Platform Engineer

elliot.west@streamnative.io

streamnative.io


Re: [DISCUSS] Code coverage report for PRs to master

Posted by PengHui Li <pe...@apache.org>.
Hi Lin,

Thanks for starting this discussion.

First, I support having a code coverage report for the PR, which will give
reviewers an initial impression of the test coverage. Although the code
coverage report can produce distorted results, it is not the supreme
indicator, just an auxiliary tool.

Also, it should be a good reminder for contributors and reviewers to pay
attention to test coverage during coding and before merging the PR.

Second, about CI congestion, IMO we should make the test runs more
efficient and stable. We are working on a list of all the tests that run
for over 5 minutes; many poorly designed tests consume a lot of resources
(starting/stopping the service for each test). Rather than preventing new
CI steps, of course we need to pay attention to the resource consumption of
the new upcoming CI. As for the flaky tests, I believe the situation is
getting better.

For https://github.com/apache/pulsar/pull/17382, I can't see the coverage
diff. It would be a great indicator for contributors and reviewers; I think
Lishen is still working on this part? When I checked the CI to see the
resource consumption of the code coverage check, it looked like it had been
skipped, and I'm not sure why. Maybe Lishen can provide more details?

Regards,
Penghui

On Sun, Sep 11, 2022 at 1:39 AM Lin Zhao <li...@streamnative.io.invalid>
wrote:

> Hi Xiangying,
>
> I totally agree that when new features or major optimizations are
> submitted, it should be in the owner's interest to care about code
> coverage for their new feature. The test cases protect their code from
> unintended side effects of other people's work.
>
> When the PR is a bug fix, we should attempt to add a unit test that
> covers the bug, which ideally increases the coverage. At a minimum, it
> should not add complex uncovered code and significantly decrease the
> coverage.
>
> I think both the button to run code coverage and the report in the PR
> have value. I support both.
>
> Lin
>
> On Fri, Sep 9, 2022 at 6:44 PM Xiangying Meng <xi...@apache.org>
> wrote:
>
> > Hi,
> > IMO, there are two scenarios when users need test coverage:
> > 1. When new features or major optimizations are submitted, the owners of
> > the PR may care about the code coverage of these new features.
> > 2. When we need to optimize the original code to increase the overall
> code
> > test coverage of Pulsar.
> > And there are two scenarios when users don't need test coverage:
> > 1. The PR only is a Bug Fix
> > 2. There is a small local optimization in this PR
> > In fact, most PRs for Pulsar right now are bug fixes or small
> > optimizations. The PR owner does not need to consider test coverage.
> >
> > So, my suggestion is that we can provide a button to run code coverage
> and
> > download coverage reports.
> >  And put this button on the CI detail page, so that people who really
> need
> > it can get the code test coverage without affecting the running of most
> > other CIs.
> >
> > Sincerely,
> > Xiangying
> >
> > On Sat, Sep 10, 2022 at 5:13 AM asn <ya...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > IMO, code coverage is not just a number, 50% or 70% makes no sense
> except
> > > to let us feel that we have more confidence. So what is really
> > important? I
> > > think it is the *coverage* itself, we need to see where we need to
> write
> > > tests in the future based the result because only if we have this data,
> > we
> > > could know where to cover. Furthermore, we can force every pr to write
> > high
> > > quality test.
> > >
> > > tison <wa...@gmail.com> 于2022年9月10日周六 01:37写道:
> > >
> > > > > Please provide data point its impact to CI stability.
> > > >
> > > > If you take a look at https://github.com/apache/pulsar/pull/17382,
> it
> > > > changes pulsar-ci.yml to run commands for generating the codecov
> > report.
> > > > I'm unsure of the impact of a new step, it may not affect too much
> > since
> > > > there's already a runner take the whole job.
> > > >
> > > > For the rest, I wrote:
> > > >
> > > > > I read the output in your proposal as simply:
> > > > >> The report will serve as additional input for the reviewers. The
> > > > requester
> > > > is expected to explain any significant negative impact.
> > > > > This works for me as a nonmandatory helper.
> > > >
> > > > Feelings can be biased. No objection to the action.
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:
> > > >
> > > > > Hi Tison,
> > > > >
> > > > > Thanks for your input. I agree that the community should focus on
> the
> > > > > priorities regarding CI that you mentioned. At the same time, I'm
> > > having
> > > > a
> > > > > hard time understanding the negative impact that you suggested from
> > > this
> > > > > change.
> > > > >
> > > > > 1. To my knowledge code coverage calculation adds little overhead
> to
> > > ci.
> > > > > Please provide data point its impact to CI stability.
> > > > > 2. Code coverage isn't completely accurate and yet it's valuable
> data
> > > > > point. The goal is not to reach 100%, but to prevent coverage
> > becoming
> > > > > worse. To me this change has only benefits and not drawbacks.
> > > > > 3. I don't agree with your sentiments about code coverage. There's
> > > > material
> > > > > difference between 50% and 70% and we are at the low end of 50%.
> *We
> > > will
> > > > > not add a commit gate to coverage change. *The report serves as an
> > > > > additional data point for the approvers.
> > > > >
> > > > > Lin
> > > > >
> > > > >
> > > > > On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:
> > > > >
> > > > > > Hi Lin,
> > > > > >
> > > > > > Thanks for starting this discussion!
> > > > > >
> > > > > > As long as it takes a different resource set from current CI
> tasks,
> > > I'm
> > > > > +0
> > > > > > as commented on PR-17382. I hardly read the report.
> > > > > >
> > > > > > I read the output in your proposal as simply:
> > > > > >
> > > > > > > The report will serve as additional input for the reviewers.
> The
> > > > > > requester
> > > > > > is expected to explain any significant negative impact.
> > > > > >
> > > > > > This works for me as a nonmandatory helper.
> > > > > >
> > > > > > Thoughts on the baseline:
> > > > > >
> > > > > > 1. I don't like Code Coverage because from my experience they're
> > > > _never_
> > > > > > precise. And chasing 100% test coverage is unnecessary.
> > > > > > 2. Although, if test coverage is below 50%, it _is_ a signal to
> me
> > > that
> > > > > we
> > > > > > suffer from too few tests.
> > > > > > 3. Over 50% or 70%, test coverage looks the same to me.
> > > > > >
> > > > > > And our tests suffer from:
> > > > > >
> > > > > > 1. Heavyweight. We're now working on resolving CI congestion.
> Many
> > > > "unit
> > > > > > tests" are actually integration tests with mock Pulsar Service
> etc.
> > > > which
> > > > > > setup/teardown takes over ten (tens of) seconds.
> > > > > > 2. Brittle. Too many mocks and radical powermocks make tests
> > brittle.
> > > > > > Mockito suggests never mock classes you don't own, but it seems
> > > nobody
> > > > > > cares. We mock over ES and ZK clients and so on.
> > > > > >
> > > > > > I don't want to spread the discussion over tests. But when it
> comes
> > > to
> > > > > > improving tests, I want to share the most emergent items on it.
> > > > > >
> > > > > > Best,
> > > > > > tison.
> > > > > >
> > > > > >
> > > > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六
> 00:26写道:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > I'd like to start a discussion about turning on CodeCov report
> > for
> > > > PRs
> > > > > to
> > > > > > > master to show the PR's impact on unit test coverage. Previous
> > > > > discussion
> > > > > > > on https://github.com/apache/pulsar/pull/17382.
> > > > > > >
> > > > > > > Proposal:
> > > > > > > 1. Unit test coverage will be added to the CI pipeline and
> > reported
> > > > to
> > > > > > the
> > > > > > > PR page.
> > > > > > > Sample report:
> > > > > > >
> > > > > > > @@            Coverage Diff            @@
> > > > > > > ##             master   #17382   +/-   ##
> > > > > > > =========================================
> > > > > > >   Coverage          ?   32.10%
> > > > > > >   Complexity        ?     4141
> > > > > > > =========================================
> > > > > > >   Files             ?      387
> > > > > > >   Lines             ?    42806
> > > > > > >   Branches          ?     4420
> > > > > > > =========================================
> > > > > > >   Hits              ?    13741
> > > > > > >   Misses            ?    27032
> > > > > > >   Partials          ?     2033
> > > > > > >
> > > > > > > 2. The report will serve as additional input for the reviewers.
> > The
> > > > > > > requester is expected to explain any significant negative
> impact.
> > > > > > >
> > > > > > > Why?
> > > > > > >
> > > > > > >
> > > > > > >    1. The existing code coverage for Pulsar is very poor at
> just
> > > > above
> > > > > > > 50%. Reasonable expectation for libraries is 90% and 70 or 80%
> > for
> > > > the
> > > > > > > broker. We are at 60% and 50%.
> > > > > > >    2. The coverage report would prevent coverage from getting
> > worse
> > > > > > > and bring the conversation on the table.
> > > > > > >    3. Even though code coverage has its limitation in assessing
> > > test
> > > > > > > coverage, it's the only viable tool.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Thoughts?
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > > yaalsn
> > >
> >
>

Re: [DISCUSS] Code coverage report for PRs to master

Posted by Lin Zhao <li...@streamnative.io.INVALID>.
Hi Xiangying,

I totally agree that when new features or major optimizations are
submitted, it should be in the owner's interest to care about code coverage
for their new feature. The test cases protect their code from unintended
side effects of other people's work.

When the PR is a bug fix, we should attempt to add a unit test that covers
the bug, which ideally increases the coverage. At a minimum, it should not
add complex uncovered code and significantly decrease the coverage.

I think both the button to run code coverage and the report in the PR have
value. I support both.

Lin

On Fri, Sep 9, 2022 at 6:44 PM Xiangying Meng <xi...@apache.org> wrote:

> Hi,
> IMO, there are two scenarios in which users need test coverage:
> 1. When new features or major optimizations are submitted, the owners of
> the PR may care about the code coverage of these new features.
> 2. When we need to improve the original code to increase the overall test
> coverage of Pulsar.
> And there are two scenarios in which users don't need test coverage:
> 1. The PR is only a bug fix.
> 2. The PR is a small local optimization.
> In fact, most PRs for Pulsar right now are bug fixes or small
> optimizations, so the PR owner does not need to consider test coverage.
>
> So, my suggestion is that we provide a button to run code coverage and
> download coverage reports, and put this button on the CI detail page, so
> that people who really need it can get the test coverage without affecting
> the running of most other CIs.
>
> Sincerely,
> Xiangying
>
> On Sat, Sep 10, 2022 at 5:13 AM asn <ya...@gmail.com> wrote:
>
> > Hi,
> >
> > IMO, code coverage is not just a number, 50% or 70% makes no sense except
> > to let us feel that we have more confidence. So what is really
> important? I
> > think it is the *coverage* itself, we need to see where we need to write
> > tests in the future based the result because only if we have this data,
> we
> > could know where to cover. Furthermore, we can force every pr to write
> high
> > quality test.
> >
> > tison <wa...@gmail.com> 于2022年9月10日周六 01:37写道:
> >
> > > > Please provide data point its impact to CI stability.
> > >
> > > If you take a look at https://github.com/apache/pulsar/pull/17382, it
> > > changes pulsar-ci.yml to run commands for generating the codecov
> report.
> > > I'm unsure of the impact of a new step, it may not affect too much
> since
> > > there's already a runner take the whole job.
> > >
> > > For the rest, I wrote:
> > >
> > > > I read the output in your proposal as simply:
> > > >> The report will serve as additional input for the reviewers. The
> > > requester
> > > is expected to explain any significant negative impact.
> > > > This works for me as a nonmandatory helper.
> > >
> > > Feelings can be biased. No objection to the action.
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:
> > >
> > > > Hi Tison,
> > > >
> > > > Thanks for your input. I agree that the community should focus on the
> > > > priorities regarding CI that you mentioned. At the same time, I'm
> > having
> > > a
> > > > hard time understanding the negative impact that you suggested from
> > this
> > > > change.
> > > >
> > > > 1. To my knowledge code coverage calculation adds little overhead to
> > ci.
> > > > Please provide data point its impact to CI stability.
> > > > 2. Code coverage isn't completely accurate and yet it's valuable data
> > > > point. The goal is not to reach 100%, but to prevent coverage
> becoming
> > > > worse. To me this change has only benefits and not drawbacks.
> > > > 3. I don't agree with your sentiments about code coverage. There's
> > > material
> > > > difference between 50% and 70% and we are at the low end of 50%. *We
> > will
> > > > not add a commit gate to coverage change. *The report serves as an
> > > > additional data point for the approvers.
> > > >
> > > > Lin
> > > >
> > > >
> > > > On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:
> > > >
> > > > > Hi Lin,
> > > > >
> > > > > Thanks for starting this discussion!
> > > > >
> > > > > As long as it takes a different resource set from current CI tasks,
> > I'm
> > > > +0
> > > > > as commented on PR-17382. I hardly read the report.
> > > > >
> > > > > I read the output in your proposal as simply:
> > > > >
> > > > > > The report will serve as additional input for the reviewers. The
> > > > > requester
> > > > > is expected to explain any significant negative impact.
> > > > >
> > > > > This works for me as a nonmandatory helper.
> > > > >
> > > > > Thoughts on the baseline:
> > > > >
> > > > > 1. I don't like Code Coverage because from my experience they're
> > > _never_
> > > > > precise. And chasing 100% test coverage is unnecessary.
> > > > > 2. Although, if test coverage is below 50%, it _is_ a signal to me
> > that
> > > > we
> > > > > suffer from too few tests.
> > > > > 3. Over 50% or 70%, test coverage looks the same to me.
> > > > >
> > > > > And our tests suffer from:
> > > > >
> > > > > 1. Heavyweight. We're now working on resolving CI congestion. Many
> > > "unit
> > > > > tests" are actually integration tests with mock Pulsar Service etc.
> > > which
> > > > > setup/teardown takes over ten (tens of) seconds.
> > > > > 2. Brittle. Too many mocks and radical powermocks make tests
> brittle.
> > > > > Mockito suggests never mock classes you don't own, but it seems
> > nobody
> > > > > cares. We mock over ES and ZK clients and so on.
> > > > >
> > > > > I don't want to spread the discussion over tests. But when it comes
> > to
> > > > > improving tests, I want to share the most emergent items on it.
> > > > >
> > > > > Best,
> > > > > tison.
> > > > >
> > > > >
> > > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > I'd like to start a discussion about turning on CodeCov report
> for
> > > PRs
> > > > to
> > > > > > master to show the PR's impact on unit test coverage. Previous
> > > > discussion
> > > > > > on https://github.com/apache/pulsar/pull/17382.
> > > > > >
> > > > > > Proposal:
> > > > > > 1. Unit test coverage will be added to the CI pipeline and
> reported
> > > to
> > > > > the
> > > > > > PR page.
> > > > > > Sample report:
> > > > > >
> > > > > > @@            Coverage Diff            @@
> > > > > > ##             master   #17382   +/-   ##
> > > > > > =========================================
> > > > > >   Coverage          ?   32.10%
> > > > > >   Complexity        ?     4141
> > > > > > =========================================
> > > > > >   Files             ?      387
> > > > > >   Lines             ?    42806
> > > > > >   Branches          ?     4420
> > > > > > =========================================
> > > > > >   Hits              ?    13741
> > > > > >   Misses            ?    27032
> > > > > >   Partials          ?     2033
> > > > > >
> > > > > > 2. The report will serve as additional input for the reviewers.
> The
> > > > > > requester is expected to explain any significant negative impact.
> > > > > >
> > > > > > Why?
> > > > > >
> > > > > >
> > > > > >    1. The existing code coverage for Pulsar is very poor at just
> > > above
> > > > > > 50%. Reasonable expectation for libraries is 90% and 70 or 80%
> for
> > > the
> > > > > > broker. We are at 60% and 50%.
> > > > > >    2. The coverage report would prevent coverage from getting
> worse
> > > > > > and bring the conversation on the table.
> > > > > >    3. Even though code coverage has its limitation in assessing
> > test
> > > > > > coverage, it's the only viable tool.
> > > > > >
> > > > > >
> > > > > >
> > > > > > Thoughts?
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > yaalsn
> >
>

Re: [DISCUSS] Code coverage report for PRs to master

Posted by Xiangying Meng <xi...@apache.org>.
Hi,
IMO, there are two scenarios in which users need test coverage:
1. When new features or major optimizations are submitted, the owners of
the PR may care about the code coverage of these new features.
2. When we need to improve the original code to increase the overall test
coverage of Pulsar.
And there are two scenarios in which users don't need test coverage:
1. The PR is only a bug fix.
2. The PR is a small local optimization.
In fact, most PRs for Pulsar right now are bug fixes or small
optimizations, so the PR owner does not need to consider test coverage.

So, my suggestion is that we provide a button to run code coverage and
download coverage reports, and put this button on the CI detail page, so
that people who really need it can get the test coverage without affecting
the running of most other CIs.
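For what it's worth, a "button" like this maps naturally onto a manually triggered GitHub Actions workflow; a hypothetical sketch (the workflow name, Maven goals, and artifact path are illustrative assumptions, not taken from pulsar-ci.yml):

```yaml
# Hypothetical on-demand coverage workflow; all names here are illustrative.
name: coverage-on-demand
on:
  workflow_dispatch:   # adds a "Run workflow" button on the Actions page
jobs:
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests with a coverage agent attached
        run: mvn -B test jacoco:report
      - name: Upload the coverage report as a downloadable artifact
        uses: actions/upload-artifact@v3
        with:
          name: coverage-report
          path: '**/target/site/jacoco'
```

Because `workflow_dispatch` runs only when someone clicks the button, it would not add load to the CI runs of other PRs.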

Sincerely,
Xiangying

On Sat, Sep 10, 2022 at 5:13 AM asn <ya...@gmail.com> wrote:

> Hi,
>
> IMO, code coverage is not just a number; 50% or 70% makes no sense except
> to make us feel more confident. So what is really important? I think it is
> the *coverage* itself: we need to see where we need to write tests in the
> future based on the result, because only with this data can we know where
> to cover. Furthermore, we can push every PR to include high-quality tests.
>
> tison <wa...@gmail.com> 于2022年9月10日周六 01:37写道:
>
> > > Please provide data point its impact to CI stability.
> >
> > If you take a look at https://github.com/apache/pulsar/pull/17382, it
> > changes pulsar-ci.yml to run commands for generating the codecov report.
> > I'm unsure of the impact of a new step, it may not affect too much since
> > there's already a runner take the whole job.
> >
> > For the rest, I wrote:
> >
> > > I read the output in your proposal as simply:
> > >> The report will serve as additional input for the reviewers. The
> > requester
> > is expected to explain any significant negative impact.
> > > This works for me as a nonmandatory helper.
> >
> > Feelings can be biased. No objection to the action.
> >
> > Best,
> > tison.
> >
> >
> > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:
> >
> > > Hi Tison,
> > >
> > > Thanks for your input. I agree that the community should focus on the
> > > priorities regarding CI that you mentioned. At the same time, I'm
> having
> > a
> > > hard time understanding the negative impact that you suggested from
> this
> > > change.
> > >
> > > 1. To my knowledge code coverage calculation adds little overhead to
> ci.
> > > Please provide data point its impact to CI stability.
> > > 2. Code coverage isn't completely accurate and yet it's valuable data
> > > point. The goal is not to reach 100%, but to prevent coverage becoming
> > > worse. To me this change has only benefits and not drawbacks.
> > > 3. I don't agree with your sentiments about code coverage. There's
> > material
> > > difference between 50% and 70% and we are at the low end of 50%. *We
> will
> > > not add a commit gate to coverage change. *The report serves as an
> > > additional data point for the approvers.
> > >
> > > Lin
> > >
> > >
> > > On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:
> > >
> > > > Hi Lin,
> > > >
> > > > Thanks for starting this discussion!
> > > >
> > > > As long as it takes a different resource set from current CI tasks,
> I'm
> > > +0
> > > > as commented on PR-17382. I hardly read the report.
> > > >
> > > > I read the output in your proposal as simply:
> > > >
> > > > > The report will serve as additional input for the reviewers. The
> > > > requester
> > > > is expected to explain any significant negative impact.
> > > >
> > > > This works for me as a nonmandatory helper.
> > > >
> > > > Thoughts on the baseline:
> > > >
> > > > 1. I don't like Code Coverage because from my experience they're
> > _never_
> > > > precise. And chasing 100% test coverage is unnecessary.
> > > > 2. Although, if test coverage is below 50%, it _is_ a signal to me
> that
> > > we
> > > > suffer from too few tests.
> > > > 3. Over 50% or 70%, test coverage looks the same to me.
> > > >
> > > > And our tests suffer from:
> > > >
> > > > 1. Heavyweight. We're now working on resolving CI congestion. Many
> > "unit
> > > > tests" are actually integration tests with mock Pulsar Service etc.
> > which
> > > > setup/teardown takes over ten (tens of) seconds.
> > > > 2. Brittle. Too many mocks and radical powermocks make tests brittle.
> > > > Mockito suggests never mock classes you don't own, but it seems
> nobody
> > > > cares. We mock over ES and ZK clients and so on.
> > > >
> > > > I don't want to spread the discussion over tests. But when it comes
> to
> > > > improving tests, I want to share the most emergent items on it.
> > > >
> > > > Best,
> > > > tison.
> > > >
> > > >
> > > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
> > > >
> > > > > Hi,
> > > > >
> > > > > I'd like to start a discussion about turning on CodeCov report for
> > PRs
> > > to
> > > > > master to show the PR's impact on unit test coverage. Previous
> > > discussion
> > > > > on https://github.com/apache/pulsar/pull/17382.
> > > > >
> > > > > Proposal:
> > > > > 1. Unit test coverage will be added to the CI pipeline and reported
> > to
> > > > the
> > > > > PR page.
> > > > > Sample report:
> > > > >
> > > > > @@            Coverage Diff            @@##             master
> > > > > #17382   +/-   ##
> > > > > =========================================
> > > > >   Coverage          ?   32.10%
> > > > >   Complexity        ?     4141
> > > > > =========================================
> > > > >   Files             ?      387
> > > > >   Lines             ?    42806
> > > > >   Branches          ?     4420
> > > > > =========================================
> > > > >   Hits              ?    13741
> > > > >   Misses            ?    27032
> > > > >   Partials          ?     2033
> > > > >
> > > > > 2. The report will serve as additional input for the reviewers. The
> > > > > requester is expected to explain any significant negative impact.
> > > > >
> > > > > Why?
> > > > >
> > > > >
> > > > >    1. The existing code coverage for Pulsar is very poor at just
> > above
> > > > > 50%. Reasonable expectation for libraries is 90% and 70 or 80% for
> > the
> > > > > broker. We are at 60% and 50%.
> > > > >    2. The coverage report would prevent coverage from getting worse
> > > > > and bring the conversation on the table.
> > > > >    3. Even though code coverage has its limitation in assessing
> test
> > > > > coverage, it's the only viable tool.
> > > > >
> > > > >
> > > > >
> > > > > Thoughts?
> > > > >
> > > >
> > >
> >
>
>
> --
> yaalsn
>

Re: [DISCUSS] Code coverage report for PRs to master

Posted by asn <ya...@gmail.com>.
Hi,

IMO, code coverage is not just a number; 50% or 70% means little on its
own except to make us feel more confident. So what is really important? I
think it is the *coverage* itself: we need to see where tests still need
to be written based on the result, because only with this data can we
know what to cover. Furthermore, we can then ask every PR to come with
high-quality tests.
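
For seeing exactly where coverage is missing, a local JaCoCo report
already gives line-by-line highlighting. A minimal sketch of the Maven
wiring (the plugin version is illustrative, and Pulsar's actual build may
already configure this differently):

```xml
<!-- pom.xml: attach the JaCoCo agent to the test run and emit an HTML report -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.8</version>
  <executions>
    <execution>
      <goals>
        <goal>prepare-agent</goal>  <!-- instruments tests via -javaagent -->
      </goals>
    </execution>
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals>
        <goal>report</goal>  <!-- writes target/site/jacoco/index.html -->
      </goals>
    </execution>
  </executions>
</plugin>
```

With that in place, running `mvn verify` on a module produces the report
locally, no CI round trip needed.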

tison <wa...@gmail.com> 于2022年9月10日周六 01:37写道:

> > Please provide data point its impact to CI stability.
>
> If you take a look at https://github.com/apache/pulsar/pull/17382, it
> changes pulsar-ci.yml to run commands for generating the codecov report.
> I'm unsure of the impact of a new step, it may not affect too much since
> there's already a runner take the whole job.
>
> For the rest, I wrote:
>
> > I read the output in your proposal as simply:
> >> The report will serve as additional input for the reviewers. The
> requester
> is expected to explain any significant negative impact.
> > This works for me as a nonmandatory helper.
>
> Feelings can be biased. No objection to the action.
>
> Best,
> tison.
>
>
> Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:
>
> > Hi Tison,
> >
> > Thanks for your input. I agree that the community should focus on the
> > priorities regarding CI that you mentioned. At the same time, I'm having
> a
> > hard time understanding the negative impact that you suggested from this
> > change.
> >
> > 1. To my knowledge code coverage calculation adds little overhead to ci.
> > Please provide data point its impact to CI stability.
> > 2. Code coverage isn't completely accurate and yet it's valuable data
> > point. The goal is not to reach 100%, but to prevent coverage becoming
> > worse. To me this change has only benefits and not drawbacks.
> > 3. I don't agree with your sentiments about code coverage. There's
> material
> > difference between 50% and 70% and we are at the low end of 50%. *We will
> > not add a commit gate to coverage change. *The report serves as an
> > additional data point for the approvers.
> >
> > Lin
> >
> >
> > On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:
> >
> > > Hi Lin,
> > >
> > > Thanks for starting this discussion!
> > >
> > > As long as it takes a different resource set from current CI tasks, I'm
> > +0
> > > as commented on PR-17382. I hardly read the report.
> > >
> > > I read the output in your proposal as simply:
> > >
> > > > The report will serve as additional input for the reviewers. The
> > > requester
> > > is expected to explain any significant negative impact.
> > >
> > > This works for me as a nonmandatory helper.
> > >
> > > Thoughts on the baseline:
> > >
> > > 1. I don't like Code Coverage because from my experience they're
> _never_
> > > precise. And chasing 100% test coverage is unnecessary.
> > > 2. Although, if test coverage is below 50%, it _is_ a signal to me that
> > we
> > > suffer from too few tests.
> > > 3. Over 50% or 70%, test coverage looks the same to me.
> > >
> > > And our tests suffer from:
> > >
> > > 1. Heavyweight. We're now working on resolving CI congestion. Many
> "unit
> > > tests" are actually integration tests with mock Pulsar Service etc.
> which
> > > setup/teardown takes over ten (tens of) seconds.
> > > 2. Brittle. Too many mocks and radical powermocks make tests brittle.
> > > Mockito suggests never mock classes you don't own, but it seems nobody
> > > cares. We mock over ES and ZK clients and so on.
> > >
> > > I don't want to spread the discussion over tests. But when it comes to
> > > improving tests, I want to share the most emergent items on it.
> > >
> > > Best,
> > > tison.
> > >
> > >
> > > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
> > >
> > > > Hi,
> > > >
> > > > I'd like to start a discussion about turning on CodeCov report for
> PRs
> > to
> > > > master to show the PR's impact on unit test coverage. Previous
> > discussion
> > > > on https://github.com/apache/pulsar/pull/17382.
> > > >
> > > > Proposal:
> > > > 1. Unit test coverage will be added to the CI pipeline and reported
> to
> > > the
> > > > PR page.
> > > > Sample report:
> > > >
> > > > @@            Coverage Diff            @@##             master
> > > > #17382   +/-   ##
> > > > =========================================
> > > >   Coverage          ?   32.10%
> > > >   Complexity        ?     4141
> > > > =========================================
> > > >   Files             ?      387
> > > >   Lines             ?    42806
> > > >   Branches          ?     4420
> > > > =========================================
> > > >   Hits              ?    13741
> > > >   Misses            ?    27032
> > > >   Partials          ?     2033
> > > >
> > > > 2. The report will serve as additional input for the reviewers. The
> > > > requester is expected to explain any significant negative impact.
> > > >
> > > > Why?
> > > >
> > > >
> > > >    1. The existing code coverage for Pulsar is very poor at just
> above
> > > > 50%. Reasonable expectation for libraries is 90% and 70 or 80% for
> the
> > > > broker. We are at 60% and 50%.
> > > >    2. The coverage report would prevent coverage from getting worse
> > > > and bring the conversation on the table.
> > > >    3. Even though code coverage has its limitation in assessing test
> > > > coverage, it's the only viable tool.
> > > >
> > > >
> > > >
> > > > Thoughts?
> > > >
> > >
> >
>


-- 
yaalsn

Re: [DISCUSS] Code coverage report for PRs to master

Posted by tison <wa...@gmail.com>.
> Please provide data point its impact to CI stability.

If you take a look at https://github.com/apache/pulsar/pull/17382, it
changes pulsar-ci.yml to run commands for generating the codecov report.
I'm unsure of the impact of the new step; it may not add much overhead
since there's already a runner taking the whole job.

For the rest, I wrote:

> I read the output in your proposal as simply:
>> The report will serve as additional input for the reviewers. The requester
is expected to explain any significant negative impact.
> This works for me as a nonmandatory helper.

Feelings can be biased. No objection to the action.

Best,
tison.


Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 01:26写道:

> Hi Tison,
>
> Thanks for your input. I agree that the community should focus on the
> priorities regarding CI that you mentioned. At the same time, I'm having a
> hard time understanding the negative impact that you suggested from this
> change.
>
> 1. To my knowledge code coverage calculation adds little overhead to ci.
> Please provide data point its impact to CI stability.
> 2. Code coverage isn't completely accurate and yet it's valuable data
> point. The goal is not to reach 100%, but to prevent coverage becoming
> worse. To me this change has only benefits and not drawbacks.
> 3. I don't agree with your sentiments about code coverage. There's material
> difference between 50% and 70% and we are at the low end of 50%. *We will
> not add a commit gate to coverage change. *The report serves as an
> additional data point for the approvers.
>
> Lin
>
>
> On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:
>
> > Hi Lin,
> >
> > Thanks for starting this discussion!
> >
> > As long as it takes a different resource set from current CI tasks, I'm
> +0
> > as commented on PR-17382. I hardly read the report.
> >
> > I read the output in your proposal as simply:
> >
> > > The report will serve as additional input for the reviewers. The
> > requester
> > is expected to explain any significant negative impact.
> >
> > This works for me as a nonmandatory helper.
> >
> > Thoughts on the baseline:
> >
> > 1. I don't like Code Coverage because from my experience they're _never_
> > precise. And chasing 100% test coverage is unnecessary.
> > 2. Although, if test coverage is below 50%, it _is_ a signal to me that
> we
> > suffer from too few tests.
> > 3. Over 50% or 70%, test coverage looks the same to me.
> >
> > And our tests suffer from:
> >
> > 1. Heavyweight. We're now working on resolving CI congestion. Many "unit
> > tests" are actually integration tests with mock Pulsar Service etc. which
> > setup/teardown takes over ten (tens of) seconds.
> > 2. Brittle. Too many mocks and radical powermocks make tests brittle.
> > Mockito suggests never mock classes you don't own, but it seems nobody
> > cares. We mock over ES and ZK clients and so on.
> >
> > I don't want to spread the discussion over tests. But when it comes to
> > improving tests, I want to share the most emergent items on it.
> >
> > Best,
> > tison.
> >
> >
> > Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
> >
> > > Hi,
> > >
> > > I'd like to start a discussion about turning on CodeCov report for PRs
> to
> > > master to show the PR's impact on unit test coverage. Previous
> discussion
> > > on https://github.com/apache/pulsar/pull/17382.
> > >
> > > Proposal:
> > > 1. Unit test coverage will be added to the CI pipeline and reported to
> > the
> > > PR page.
> > > Sample report:
> > >
> > > @@            Coverage Diff            @@##             master
> > > #17382   +/-   ##
> > > =========================================
> > >   Coverage          ?   32.10%
> > >   Complexity        ?     4141
> > > =========================================
> > >   Files             ?      387
> > >   Lines             ?    42806
> > >   Branches          ?     4420
> > > =========================================
> > >   Hits              ?    13741
> > >   Misses            ?    27032
> > >   Partials          ?     2033
> > >
> > > 2. The report will serve as additional input for the reviewers. The
> > > requester is expected to explain any significant negative impact.
> > >
> > > Why?
> > >
> > >
> > >    1. The existing code coverage for Pulsar is very poor at just above
> > > 50%. Reasonable expectation for libraries is 90% and 70 or 80% for the
> > > broker. We are at 60% and 50%.
> > >    2. The coverage report would prevent coverage from getting worse
> > > and bring the conversation on the table.
> > >    3. Even though code coverage has its limitation in assessing test
> > > coverage, it's the only viable tool.
> > >
> > >
> > >
> > > Thoughts?
> > >
> >
>

Re: [DISCUSS] Code coverage report for PRs to master

Posted by Lin Zhao <li...@streamnative.io.INVALID>.
Hi Tison,

Thanks for your input. I agree that the community should focus on the CI
priorities you mentioned. At the same time, I'm having a hard time
understanding the negative impact you suggest this change would have.

1. To my knowledge, code coverage calculation adds little overhead to CI.
Please provide data points on its impact on CI stability.
2. Code coverage isn't completely accurate, yet it's a valuable data
point. The goal is not to reach 100%, but to prevent coverage from
getting worse. To me this change has only benefits and no drawbacks.
3. I don't agree with your sentiments about code coverage. There's a
material difference between 50% and 70%, and we are at the low end of
that range. *We will not add a commit gate on coverage changes.* The
report serves as an additional data point for the approvers.

Lin


On Fri, Sep 9, 2022 at 9:51 AM tison <wa...@gmail.com> wrote:

> Hi Lin,
>
> Thanks for starting this discussion!
>
> As long as it takes a different resource set from current CI tasks, I'm +0
> as commented on PR-17382. I hardly read the report.
>
> I read the output in your proposal as simply:
>
> > The report will serve as additional input for the reviewers. The
> requester
> is expected to explain any significant negative impact.
>
> This works for me as a nonmandatory helper.
>
> Thoughts on the baseline:
>
> 1. I don't like Code Coverage because from my experience they're _never_
> precise. And chasing 100% test coverage is unnecessary.
> 2. Although, if test coverage is below 50%, it _is_ a signal to me that we
> suffer from too few tests.
> 3. Over 50% or 70%, test coverage looks the same to me.
>
> And our tests suffer from:
>
> 1. Heavyweight. We're now working on resolving CI congestion. Many "unit
> tests" are actually integration tests with mock Pulsar Service etc. which
> setup/teardown takes over ten (tens of) seconds.
> 2. Brittle. Too many mocks and radical powermocks make tests brittle.
> Mockito suggests never mock classes you don't own, but it seems nobody
> cares. We mock over ES and ZK clients and so on.
>
> I don't want to spread the discussion over tests. But when it comes to
> improving tests, I want to share the most emergent items on it.
>
> Best,
> tison.
>
>
> Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:
>
> > Hi,
> >
> > I'd like to start a discussion about turning on CodeCov report for PRs to
> > master to show the PR's impact on unit test coverage. Previous discussion
> > on https://github.com/apache/pulsar/pull/17382.
> >
> > Proposal:
> > 1. Unit test coverage will be added to the CI pipeline and reported to
> the
> > PR page.
> > Sample report:
> >
> > @@            Coverage Diff            @@##             master
> > #17382   +/-   ##
> > =========================================
> >   Coverage          ?   32.10%
> >   Complexity        ?     4141
> > =========================================
> >   Files             ?      387
> >   Lines             ?    42806
> >   Branches          ?     4420
> > =========================================
> >   Hits              ?    13741
> >   Misses            ?    27032
> >   Partials          ?     2033
> >
> > 2. The report will serve as additional input for the reviewers. The
> > requester is expected to explain any significant negative impact.
> >
> > Why?
> >
> >
> >    1. The existing code coverage for Pulsar is very poor at just above
> > 50%. Reasonable expectation for libraries is 90% and 70 or 80% for the
> > broker. We are at 60% and 50%.
> >    2. The coverage report would prevent coverage from getting worse
> > and bring the conversation on the table.
> >    3. Even though code coverage has its limitation in assessing test
> > coverage, it's the only viable tool.
> >
> >
> >
> > Thoughts?
> >
>

Re: [DISCUSS] Code coverage report for PRs to master

Posted by tison <wa...@gmail.com>.
Hi Lin,

Thanks for starting this discussion!

As long as it takes a different resource set from current CI tasks, I'm +0
as commented on PR-17382. I hardly read the report.

I read the output in your proposal as simply:

> The report will serve as additional input for the reviewers. The requester
is expected to explain any significant negative impact.

This works for me as a nonmandatory helper.

Thoughts on the baseline:

1. I don't like code coverage because, in my experience, the numbers are
_never_ precise. And chasing 100% test coverage is unnecessary.
2. That said, if test coverage is below 50%, it _is_ a signal to me that we
suffer from too few tests.
3. Above 50% or 70%, test coverage looks the same to me.

And our tests suffer from:

1. Heavyweight. We're now working on resolving CI congestion. Many "unit
tests" are actually integration tests with a mocked Pulsar service etc.,
whose setup/teardown takes tens of seconds.
2. Brittle. Too many mocks and aggressive PowerMock usage make tests
brittle. Mockito suggests never mocking classes you don't own, but it
seems nobody cares; we mock the ES and ZK clients and so on.
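
On the "never mock classes you don't own" point, the usual remedy is a
thin, project-owned interface in front of the external client, so tests
substitute a trivial in-memory fake instead of mocking ZK/ES internals.
A minimal sketch; the names (MetadataReader, InMemoryMetadataReader) are
illustrative, not Pulsar APIs:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The project-owned seam: code under test depends on this interface,
// not on the external client's classes.
interface MetadataReader {
    Optional<String> get(String path);
}

// Test double: a plain in-memory fake, no mocking framework required.
// A production implementation would delegate to the real ZK/ES client.
class InMemoryMetadataReader implements MetadataReader {
    private final Map<String, String> data = new HashMap<>();

    InMemoryMetadataReader put(String path, String value) {
        data.put(path, value);
        return this;
    }

    public Optional<String> get(String path) {
        return Optional.ofNullable(data.get(path));
    }
}

public class OwnedInterfaceExample {
    // Example logic under test, written against the owned interface.
    static String describe(MetadataReader reader, String path) {
        return reader.get(path).map(v -> path + "=" + v)
                     .orElse(path + " is absent");
    }

    public static void main(String[] args) {
        MetadataReader fake =
            new InMemoryMetadataReader().put("/cluster", "standalone");
        System.out.println(describe(fake, "/cluster"));
        System.out.println(describe(fake, "/missing"));
    }
}
```

Tests written this way stay fast and don't break when the external
client's internals change.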

I don't want to broaden the discussion to tests in general. But when it
comes to improving tests, I want to share the most urgent items.

Best,
tison.


Lin Zhao <li...@streamnative.io.invalid> 于2022年9月10日周六 00:26写道:

> Hi,
>
> I'd like to start a discussion about turning on CodeCov report for PRs to
> master to show the PR's impact on unit test coverage. Previous discussion
> on https://github.com/apache/pulsar/pull/17382.
>
> Proposal:
> 1. Unit test coverage will be added to the CI pipeline and reported to the
> PR page.
> Sample report:
>
> @@            Coverage Diff            @@##             master
> #17382   +/-   ##
> =========================================
>   Coverage          ?   32.10%
>   Complexity        ?     4141
> =========================================
>   Files             ?      387
>   Lines             ?    42806
>   Branches          ?     4420
> =========================================
>   Hits              ?    13741
>   Misses            ?    27032
>   Partials          ?     2033
>
> 2. The report will serve as additional input for the reviewers. The
> requester is expected to explain any significant negative impact.
>
> Why?
>
>
>    1. The existing code coverage for Pulsar is very poor at just above
> 50%. Reasonable expectation for libraries is 90% and 70 or 80% for the
> broker. We are at 60% and 50%.
>    2. The coverage report would prevent coverage from getting worse
> and bring the conversation on the table.
>    3. Even though code coverage has its limitation in assessing test
> coverage, it's the only viable tool.
>
>
>
> Thoughts?
>