Posted to dev@beam.apache.org by Łukasz Gajowy <lu...@gmail.com> on 2018/05/16 13:54:06 UTC

Performance Testing Dashboard - which results should be official?

Hi all,

I created an issue about what should and shouldn't be included in the
Performance Testing dashboard. More generally, we need to settle which
results should be treated as the official ones. The issue description
contains my idea for solving this, but I might be missing something. If
you're interested in this topic and willing to contribute, you're welcome
to!

Issue link: https://issues.apache.org/jira/browse/BEAM-4298

(please note that there's a related issue linked)


Best regards,
Łukasz Gajowy

Re: Performance Testing Dashboard - which results should be official?

Posted by Łukasz Gajowy <lu...@gmail.com>.
That is correct - I asked purely for organizational purposes. Please keep
in mind that there is still some work to do: getting rid of some test
flakiness, properly building the test code before running the tests, and
detecting the anomalies/regressions that happen in IOs. We're working
on it and will inform the community when it's done.

Thank you for all the comments so far!


Re: Performance Testing Dashboard - which results should be official?

Posted by Kenneth Knowles <kl...@google.com>.
Commented on the JIRA. I think this topic isn't so much about
runner-to-runner comparison but just about getting organized. For me,
working on a particular runner, IO, or DSL, the results are very helpful
for seeing trends over time.


Re: Performance Testing Dashboard - which results should be official?

Posted by Jean-Baptiste Onofré <jb...@nanthrax.net>.
Hi Lukasz,

Thanks, I'm going to comment in the Jira.

Generally speaking, I'm not a big fan of comparing one runner against
another, because there are a bunch of parameters that can influence the
results.

Regards
JB
