Posted to dev@flink.apache.org by Ufuk Celebi <uc...@apache.org> on 2015/06/04 00:32:26 UTC

Failing tests policy

Hey all,

we have certain test cases which are failing regularly on Travis. In all
cases I can think of, we just keep the test activated.

I think this makes it very hard for regular contributors to take these
failures seriously. I think the following situation is not unrealistic with
the current policy: I know that test X is failing. I don't know that person
Y fixed this test. I see test X failing (again for a different reason) and
think that it is a "known issue".

I think a better policy is to just disable the test, assign someone to fix
it, and then only enable it again after someone has fixed it.
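
For illustration, a minimal sketch of what disabling such a test could look like with JUnit 4's @Ignore and a pointer to a tracking issue (the class, method, and issue id below are placeholders, not actual Flink tests):

    import org.junit.Ignore;
    import org.junit.Test;

    public class FlakyExampleITCase {

        // Disabled until the instability is resolved; FLINK-XXXX is a placeholder id.
        @Ignore("Unstable on Travis, see FLINK-XXXX; re-enable once fixed")
        @Test
        public void testSomethingUnstable() throws Exception {
            // The test body stays in place so the test can simply be re-enabled later.
        }
    }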

Is this reasonable? Or do we have good reasons to keep such tests (there
are currently one or two) activated?

– Ufuk

Re: Failing tests policy

Posted by Fabian Hueske <fh...@gmail.com>.
The tests that Ufuk is referring to are not deterministically failing. This is about hard-to-debug and hard-to-fix tests where it is not clear who broke them.

Fixing such a test can take several days or even more… So locking the master branch is not an option IMO.


Deactivating the tests will lower the test coverage, such that bugs that would have been caught by a test that infrequently fails for another reason are not identified.


How about opening JIRAs for non-deterministically failing tests and assigning a special label to them? Whenever a test fails, one can check whether it is a known issue and act accordingly.

From: Matthias J. Sax
Sent: Thursday, 4 June 2015 09:06
To: dev@flink.apache.org

I think people should be forced to fix failing tests asap. One way to
go could be to lock the master branch until the test is fixed. If
nobody can push to the master, pressure is very high for the responsible
developer to get it done asap. Not sure if this is Apache compatible.

Just a thought (from industry experience).



Re: Failing tests policy

Posted by "Matthias J. Sax" <mj...@informatik.hu-berlin.de>.
I agree. It does not help with the current unstable tests. However, it
can help to prevent running into instability issues in the future.

On 06/04/2015 11:58 AM, Fabian Hueske wrote:
> I think the problem is less with bugs being introduced by new commits but
> rather with bugs which are already in the code base.
> 


Re: Failing tests policy

Posted by Fabian Hueske <fh...@gmail.com>.
I think the problem is less with bugs being introduced by new commits but
rather with bugs which are already in the code base.

2015-06-04 11:52 GMT+02:00 Matthias J. Sax <mj...@informatik.hu-berlin.de>:

> I have another idea: the problem is that some commit might destabilize
> a formerly stable test. This is not detected because the build was
> ("accidentally") green and the code is merged.
>
> We could reduce the probability that this happens if a pull request
> must pass the test run multiple times (maybe 5x). Of course, it takes
> much time to run all tests on Travis that often and increases the time
> until something can be merged. But it might be worth the effort.
>
> Opinions on that?
>

Re: Failing tests policy

Posted by "Matthias J. Sax" <mj...@informatik.hu-berlin.de>.
I have another idea: the problem is that some commit might destabilize
a formerly stable test. This is not detected because the build was
("accidentally") green and the code is merged.

We could reduce the probability that this happens if a pull request
must pass the test run multiple times (maybe 5x). Of course, it takes
much time to run all tests on Travis that often and increases the time
until something can be merged. But it might be worth the effort.

Opinions on that?
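
As an illustration only (the proposal above is about repeating the whole Travis run, not single tests), a rough JUnit 4 sketch of the same idea applied to one suspect test: a hypothetical rule that executes the test body several times, so instability has more chances to show up before a merge:

    import org.junit.rules.TestRule;
    import org.junit.runner.Description;
    import org.junit.runners.model.Statement;

    // Hypothetical rule, not existing Flink code: runs each test body several times in a row.
    public class RepeatRule implements TestRule {

        private final int times;

        public RepeatRule(int times) {
            this.times = times;
        }

        @Override
        public Statement apply(final Statement base, Description description) {
            return new Statement() {
                @Override
                public void evaluate() throws Throwable {
                    // The first failing repetition fails the whole test.
                    for (int i = 0; i < times; i++) {
                        base.evaluate();
                    }
                }
            };
        }
    }

A test class would then declare, for example, @Rule public RepeatRule repeat = new RepeatRule(5); and a test that fails only occasionally gets five chances to fail before the build goes green.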

On 06/04/2015 11:35 AM, Ufuk Celebi wrote:
> Thanks for the feedback and the suggestions.
> 
> As Stephan said, the "we have to fix it asap" usually does not work well. I think blocking master is not an option, exactly for the reasons that Fabian and Till outlined.
> 
> From the comments so far, I don't feel like we are eager to adopt a disable policy.
> 
> I still think it is a better policy. I think we actually don't decrease test coverage by disabling a flakey test, but increase it. For example the KafkaITCase is in one of the modules, which is tested in the middle of a build. If it fails (as it does sometimes), a lot of later tests don't run. I'm not sure if we have the time (or discipline) to trigger a 1hr build again when a known-to-fail test is failing and 4 of the other builds are succeeding.
> 
> – Ufuk
> 


Re: Failing tests policy

Posted by Ufuk Celebi <uc...@apache.org>.
Thanks for the feedback and the suggestions.

As Stephan said, the "we have to fix it asap" usually does not work well. I think blocking master is not an option, exactly for the reasons that Fabian and Till outlined.

From the comments so far, I don't feel like we are eager to adopt a disable policy.

I still think it is a better policy. I think we actually don't decrease test coverage by disabling a flakey test, but increase it. For example the KafkaITCase is in one of the modules, which is tested in the middle of a build. If it fails (as it does sometimes), a lot of later tests don't run. I'm not sure if we have the time (or discipline) to trigger a 1hr build again when a known-to-fail test is failing and 4 of the other builds are succeeding.

– Ufuk

On 04 Jun 2015, at 09:25, Till Rohrmann <ti...@gmail.com> wrote:

> I'm also in favour of quickly fixing the failing test cases but I think
> that blocking the master is a kind of drastic measure. IMO this creates a
> culture of blaming someone whereas I would prefer a more proactive
> approach. When you see a failing test case and know that someone recently
> worked on it, then ping him because maybe he can quickly fix it or knows
> about it. If he's not available, e.g. holidays, busy with other stuff,
> etc., then maybe one can investigate the problem oneself and fix it.
> 
> But this is basically our current approach and I don't know how to enforce
> this policy by some means. Maybe it's making people more aware of it and
> motivating people to have a stable master.
> 
> Cheers,
> Till
> 


Re: Failing tests policy

Posted by Till Rohrmann <ti...@gmail.com>.
I'm also in favour of quickly fixing the failing test cases but I think
that blocking the master is a kind of drastic measure. IMO this creates a
culture of blaming someone whereas I would prefer a more proactive
approach. When you see a failing test case and know that someone recently
worked on it, then ping him because maybe he can quickly fix it or knows
about it. If he's not available, e.g. holidays, busy with other stuff,
etc., then maybe one can investigate the problem oneself and fix it.

But this is basically our current approach and I don't know how to enforce
this policy by some means. Maybe it's making people more aware of it and
motivating people to have a stable master.

Cheers,
Till

On Thu, Jun 4, 2015 at 9:06 AM, Matthias J. Sax <
mjsax@informatik.hu-berlin.de> wrote:

> I think people should be forced to fix failing tests asap. One way to
> go could be to lock the master branch until the test is fixed. If
> nobody can push to the master, pressure is very high for the responsible
> developer to get it done asap. Not sure if this is Apache compatible.
>
> Just a thought (from industry experience).
>
>

Re: Failing tests policy

Posted by "Matthias J. Sax" <mj...@informatik.hu-berlin.de>.
I think people should be forced to fix failing tests asap. One way to
go could be to lock the master branch until the test is fixed. If
nobody can push to the master, pressure is very high for the responsible
developer to get it done asap. Not sure if this is Apache compatible.

Just a thought (from industry experience).


On 06/04/2015 08:10 AM, Aljoscha Krettek wrote:
> I tend to agree with Ufuk, although it would be nice to fix them very quickly.
> 


Re: Failing tests policy

Posted by Aljoscha Krettek <al...@apache.org>.
I tend to agree with Ufuk, although it would be nice to fix them very quickly.

On Thu, Jun 4, 2015 at 1:26 AM, Stephan Ewen <se...@apache.org> wrote:
> @matthias: That is the implicit policy right now. Seems not to work...
>

Re: Failing tests policy

Posted by Stephan Ewen <se...@apache.org>.
@matthias: That is the implicit policy right now. Seems not to work...

On Thu, Jun 4, 2015 at 12:40 AM, Matthias J. Sax <
mjsax@informatik.hu-berlin.de> wrote:

> I basically agree that the current policy is not optimal. However, I
> would rather give failing tests "top priority" to get fixed (if possible
> within one/a-few days) and not disable them.
>
> -Matthias
>

Re: Failing tests policy

Posted by "Matthias J. Sax" <mj...@informatik.hu-berlin.de>.
I basically agree that the current policy is not optimal. However, I
would rather give failing tests "top priority" to get fixed (if possible
within one/a-few days) and not disable them.

-Matthias

On 06/04/2015 12:32 AM, Ufuk Celebi wrote:
> Hey all,
> 
> we have certain test cases which are failing regularly on Travis. In all
> cases I can think of, we just keep the test activated.
> 
> I think this makes it very hard for regular contributors to take these
> failures seriously. I think the following situation is not unrealistic with
> the current policy: I know that test X is failing. I don't know that person
> Y fixed this test. I see test X failing (again for a different reason) and
> think that it is a "known issue".
> 
> I think a better policy is to just disable the test, assign someone to fix
> it, and then only enable it again after someone has fixed it.
> 
> Is this reasonable? Or do we have good reasons to keep such tests (there
> are currently one or two) activated?
> 
> – Ufuk
>