Posted to dev@jmeter.apache.org by Antonio Gomes Rodrigues <ra...@gmail.com> on 2021/11/02 20:58:10 UTC

Re: Move "precise throughput computation" to thread group

Hi Vladimir,

It looks good, I will test it asap

Do you plan to add a feature to load/save the seed used to generate event
fire times, so that each test execution gets exactly the same repeatable sequence?

On Sun, Oct 31, 2021 at 21:01, Vladimir Sitnikov <si...@gmail.com>
wrote:

> I wanted to add a chart with "desired load profile", and I found
> https://github.com/JetBrains/lets-plot charting library.
> The screenshot is in the PR
>
> It adds some jars to the distribution; however, I think it would be hard to
> find an alternative library.
> The unfortunate consequence is that UI dependencies become "core"
> dependencies. I've no idea what to do about that,
> and we might want to move the UI classes to a core-ui module or
> something like that.
>
> The current dependency diff is as follows:
>
>     54852334 => 71166807 bytes (+16314473 bytes)
>     99 => 134 files (+35)
>
>   +   17536 annotations-13.0.jar
>   -   18538 annotations-16.0.2.jar
>   +  507197 base-portable-jvm-2.1.0.jar
>   +  485809 batik-anim-1.14.jar
>   +  424624 batik-awt-util-1.14.jar
>   +  703757 batik-bridge-1.14.jar
>   +  112373 batik-codec-1.14.jar
>   +    8433 batik-constants-1.14.jar
>   +  330318 batik-css-1.14.jar
>   +  184487 batik-dom-1.14.jar
>   +   10238 batik-ext-1.14.jar
>   +  192087 batik-gvt-1.14.jar
>   +   11466 batik-i18n-1.14.jar
>   +   76875 batik-parser-1.14.jar
>   +   25876 batik-script-1.14.jar
>   +    6663 batik-shared-resources-1.14.jar
>   +  232734 batik-svg-dom-1.14.jar
>   +  227514 batik-svggen-1.14.jar
>   +  129300 batik-transcoder-1.14.jar
>   +  127477 batik-util-1.14.jar
>   +   33866 batik-xml-1.14.jar
>   +   32033 kotlin-logging-jvm-2.0.5.jar
>   + 2993765 kotlin-reflect-1.5.21.jar
>   + 1505952 kotlin-stdlib-1.5.31.jar
>   +  198322 kotlin-stdlib-common-1.5.31.jar
>   +   22986 kotlin-stdlib-jdk7-1.5.31.jar
>   +   16121 kotlin-stdlib-jdk8-1.5.31.jar
>   +  792176 kotlinx-html-jvm-0.7.3.jar
>   +  196707 lets-plot-batik-2.1.0.jar
>   + 3593892 lets-plot-common-2.1.0.jar
>   +  627895 plot-api-jvm-3.0.2.jar
>   +  882509 plot-base-portable-jvm-2.1.0.jar
>   +  792534 plot-builder-portable-jvm-2.1.0.jar
>   +  115007 plot-common-portable-jvm-2.1.0.jar
>   +  454864 plot-config-portable-jvm-2.1.0.jar
>   +  173932 vis-svg-portable-jvm-2.1.0.jar
>   +   85686 xml-apis-ext-1.3.04.jar
>
> Vladimir
>

Re: Move "precise throughput computation" to thread group

Posted by Felix Schumacher <fe...@internetallee.de>.
I would be OK with the addition of Kotlin to the core, as long as we are certain
(and agree) that JMeter will still be usable with
"normal" Java (that seems to be the case, as I read it).

But it is yet another language added to our growing list of languages
(Java, Groovy, Ant, Gradle, JS, XSLT, XML, Velocity, Markdown, HTML, ...)

Felix

Am 25.11.21 um 10:06 schrieb Vladimir Sitnikov:
> Philippe. All,
>
> Are there blockers on including Kotlin for main and test code in JMeter?
>
> I proposed Kotlin-based implementation for OpenModelThreadGroup almost a
> month ago,
> and it looks like Kotlin is the only open question left.
>
> I do not think I have seen opinions like "somebody quitting JMeter
> community because of Kotlin"
> I do not think I have seen opinions like "somebody stopping JMeter
> maintenance because of Kotlin".
> In theory, such people might exist, however, we can never make
> everybody happy, especially those who never speak up.
>
> Can we accept the use of Kotlin for the main codebase and move forward?
>
> Vladimir
>


Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
I've merged Open Model Thread Group.
Thank you everybody for the review.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Thank you.

>JMeter will still be usable with
>"normal" Java (that seems to be the case, as I read it)

By default, Kotlin code is usable from Java, including via reflection and so on.
I've added explicitApi mode, so the "default" visibility modifier in Kotlin
code becomes an error.
It forces developers to decide whether a class or method is public or
not.

It is yet another small feature that is absent in Java.

>But it is yet another language added to our growing list of languages
>(Java, Groovy, Ant, Gradle, JS, XSLT, XML, Velocity, Markdown, HTML, ...)

We use the Kotlin DSL for Gradle, which replaced almost all of the Ant tasks.

I suggest we replace the Groovy tests with Kotlin.
Currently, we have 42 Groovy tests, so the replacement should not be that
complicated.
As a side effect, it would enable us to remove Spock (and JUnit 4!) from the
test classpath.

What do you think?

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <ph...@gmail.com>.
Hello,

I have the same position as Felix.

Regards

On Thu, Nov 25, 2021 at 10:07 AM Vladimir Sitnikov <
sitnikov.vladimir@gmail.com> wrote:

> Philippe. All,
>
> Are there blockers on including Kotlin for main and test code in JMeter?
>
> I proposed Kotlin-based implementation for OpenModelThreadGroup almost a
> month ago,
> and it looks like Kotlin is the only open question left.
>
> I do not think I have seen opinions like "somebody quitting JMeter
> community because of Kotlin"
> I do not think I have seen opinions like "somebody stopping JMeter
> maintenance because of Kotlin".
> In theory, such people might exist, however, we can never make
> everybody happy, especially those who never speak up.
>
> Can we accept the use of Kotlin for the main codebase and move forward?
>
> Vladimir
>


-- 
Cordialement.
Philippe Mouawad.

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Philippe. All,

Are there blockers on including Kotlin for main and test code in JMeter?

I proposed Kotlin-based implementation for OpenModelThreadGroup almost a
month ago,
and it looks like Kotlin is the only open question left.

I do not think I have seen opinions like "somebody quitting the JMeter
community because of Kotlin".
I do not think I have seen opinions like "somebody stopping JMeter
maintenance because of Kotlin".
In theory, such people might exist, however, we can never make
everybody happy, especially those who never speak up.

Can we accept the use of Kotlin for the main codebase and move forward?

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
I think the code is ready for review and merge.
I am inclined to rename the controller to "Open Model Requests" (see below).

* validation mode now works for the open model
* unit tests for the schedule generators are there
* even_arrivals is supported as well (for both constant and
increasing/decreasing loads)
* e2e tests are missing; however, I'm inclined to add them with a test DSL


The key missing bits from my point of view are:
* consensus on Kotlin
* a screenshot for the user manual

I have not added the screenshot as there's not much feedback yet.

----

As I thought about the "thread limit" more, it might be that "Open Model Thread
Group" should be named something like
"Open Model Virtual User Group".
Technically speaking, users should not care much about the number of
(hardware?) threads.
For instance, if we use "Loom virtual threads" or "coroutines", then "the
number of threads" becomes moot.

The end-user-facing properties are:
* "max number of concurrent in-flight requests" == max number of active
"virtual users"
* "max number of open sessions/connections" == total number of "virtual
users", including both active and inactive.

So it might make sense to remove "Thread Group" from the name and replace
it with something else.
For instance:
* Open Model Requests  <-- I am inclined to this one
* Open Model Virtual Users

WDYT?

I am inclined to think that "Open Model" should have a knob for "max number of
concurrent in-flight requests" (== max request concurrency).

It might be that "Open Model" should NOT deal with "sessions/connection
pools".
So "same user on each iteration", "different user on each
iteration", or even "new user in 30% of iterations"
might be configured as a test plan element.
For instance:
a) a Flow Control Action that configures the probability of "resetting the user"
b) a "user pool" configuration element that would borrow a user from a pool,
so JMeter could emulate the case when a user makes a long pause
and continues the scenario after 10 min of idle time. The thing is that a
"connection pool" or "cookie pool" seems to consume far less memory than a
fully cloned JMeter test plan, so having connection pools would scale
better than trying to keep lots of cloned test plans in an idle state.


Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Upd:

I've added the relevant tests and CI now passes.
I've updated the build configuration, so the build now passes without
warnings with Gradle 7.3 and Java 17.

SecurityManager has been deprecated for removal, so we might need to drop
its use.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
I renamed the thread group to "Open Model Thread Group" as it seems to be
the best title among the suggestions.
I added code comments.
Review is welcome. I've no idea why GitHub CI hangs, and I have not
explored it yet.

So far there's a request from Philippe regarding Kotlin removal.

@Philippe, do you veto the use of Kotlin?

I would like to refrain from debates and discussions since most of the
discussions are theoretical.
I suggest we just try it out and see how it goes.

Please do not forget there are features that require Kotlin, so we can
hardly "veto the use of Kotlin in JMeter":
* Programmatic test plan DSL
* Coroutine-based async samplers and virtual users
* Jetpack Compose UI

I am sure "programmatic test plan DSL" would help a lot for creating test
code for JMeter itself.

----

I will finish the following items soon, and then the PR will be ready
for merging:
* Force thread termination on "schedule finish". I think the last "pause"
is not accounted for right now
* Add more tests (some of the tests use println, and it should not be like
that)
* Do something with even_arrivals(...). Currently, only random_arrivals is
implemented, and I would either postpone even_arrivals to a later
release or implement it now.
* Implement "validation mode" (== run the thread group with a single iteration).
By the way, I've received a suggestion that a schedule of "1" should yield "one
execution of a single thread", just like in "validation mode". I'm not sure
that is an intuitive configuration; however, it more-or-less aligns with
the old thread group, where you can just set 1 everywhere and get "one
sample".

Things I would like to postpone for the later releases:
* "same user on each iteration" vs "new user on each iteration"
* "loop count"
* "max threads"

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Vincent>Usually I do not model the load in urls/sec but in full scenarios
(from
Vincent>login to logout) per hour in high activity.

Ah, it looks like you misunderstood my proposal.
The proposed "Precise Thread Group" measures the workload rate in scenarios
rather than in URLs or samplers.
It does not care how many samplers there are in the thread group; what
it does is ensure that the number of "execution launches"
matches the configured rate.

Vincent>200 new document creations per hour + 400 document searches per
hour + 250
Vincent>document modifications / hour.

Let me put some examples:

1) If you have exact rates for the scenarios, then you can configure 3 thread
groups with the required rates.
Note that the "document creation scenario" can contain an arbitrary number of
samplers; the rate is per scenario rather than per sampler.

Precise Thread Group: schedule="rate(200/hour) random_arrivals(30 min)"
   - "document creation scenario"
Precise Thread Group: schedule="rate(400/hour) random_arrivals(30 min)"
   - "document search scenario"
Precise Thread Group: schedule="rate(250/hour) random_arrivals(30 min)"
   - "document modification scenario"

2) On the other hand, if you do not have specific rates, but you want a
single "850 scenarios per hour" figure plus percentages,
you can use a Throughput Controller or something like that to divide the 850
among the three scenarios.
Suppose the breakdown is 200:400:250 == 24% : 47% : 29%, and the total rate
is 200+400+250 == 850 per hour.

The test could be:

Precise Thread Group: schedule="rate(850/hour) random_arrivals(30 min)"
  - Throughput Controller: probability=24%
      - "document creation scenario"
  - Throughput Controller: probability=47%
      - "document search scenario"
  - Throughput Controller: probability=29%
      - "document modification scenario"

Both options would generate the desired "200 + 400 + 250" per hour
workload, and I do not see why you would want to use pacing instead.
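For reference, the 24% / 47% / 29% figures above follow directly from the per-scenario rates; a tiny sketch of the arithmetic (illustrative only, not JMeter code):

```java
// Illustrative arithmetic only: deriving the Throughput Controller
// percentages from the per-scenario rates quoted above (200/400/250 per hour).
public class Breakdown {
    // Percentage share of one scenario's rate in the total, rounded.
    static long percent(int rate, int total) {
        return Math.round(100.0 * rate / total);
    }

    public static void main(String[] args) {
        int[] rates = {200, 400, 250};
        int total = 0;
        for (int r : rates) total += r;            // 850 scenarios per hour
        for (int r : rates) {
            System.out.println(r + "/hour -> " + percent(r, total) + "%");
        }
        // prints 24%, 47%, 29% for the three scenarios
    }
}
```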

>I prefer to use the concept of pacing which is simpler and which allows
>easily to make load with step (100% load 1h then 150% load 1h)

I do not see why you think "pacing" makes the test easier. Where exactly does
it help?

You can launch JMeter multiple times with different parameters (that is
what I do, because I have a runner for that),
or you can configure the Precise Thread Group to run several load steps,
like "rate(850/hour) random_arrivals(30 min) pause(0) rate(1700/hour)
random_arrivals(30 min)".
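As a sanity check on such a stepped schedule, the expected sample count of each rate(...) random_arrivals(...) segment is simply rate times duration. A small sketch of that arithmetic (illustrative only, not the actual schedule parser):

```java
// Illustrative sketch: expected sample counts for the stepped schedule
// "rate(850/hour) random_arrivals(30 min) pause(0)
//  rate(1700/hour) random_arrivals(30 min)".
public class StepLoad {
    // Expected samples of one segment: rate (per hour) times duration (hours).
    static double expectedSamples(double ratePerHour, double hours) {
        return ratePerHour * hours;
    }

    public static void main(String[] args) {
        double first = expectedSamples(850, 0.5);   // 425.0
        double second = expectedSamples(1700, 0.5); // 850.0
        System.out.println(first + " + " + second + " = " + (first + second));
        // 1275.0 samples expected over the full hour
    }
}
```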

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vincent Daburon <vd...@gmail.com>.
Hi,

> Vincent mentions the pacing feature, however, I believe it is really
> really close to my current proposal as well.
> For instance, "pacing of 1 min" means exactly the same as configuring "1
> request per minute".
> In other words, if you configure the new thread group to "1 request per
> minute", it would spawn exactly one request every minute (on average).
> If you configure "0.5 requests per minute" you get the same thing as
> "pacing 2 min".

When I model the load for a test, I reason in terms of scenarios
played per hour.

Can the hardware architecture support, for example:
200 new document creations per hour + 400 document searches per hour + 250
document modifications per hour,

or, for a merchant site:
1000 article consultations per hour + 1500 product searches per hour + 80
purchases per hour?

Usually I do not model the load in URLs/sec but in full scenarios (from
login to logout) per hour during peak activity.

Navigation scenarios can have alternative paths, like a search which does
not return any results.
Modeling with a number of hits/sec or requests/min is not easy.

I prefer to use the concept of pacing, which is simpler and which allows
easily making stepped load (100% load for 1 h, then 150% load for 1 h).

Regards.
Vincent DAB.



On Tue, Nov 9, 2021 at 09:41, Vladimir Sitnikov <si...@gmail.com>
wrote:

> Mariusz>also discussion about JMeter threading model and constraints
>
> Frankly speaking, this time I just implemented the thread group the way I
> wanted, and I did zero research on the past requests.
> Thanks for the links above, and I think they discuss the very similar
> problem and I think Kirk would appreciate the new thread group.
> However, I'm not sure if Kirk uses JMeter often nowadays :-/
>
> A "fun" discovery is my own mail on exactly the same issue from 2015:
> https://lists.apache.org/thread/2mnjf3vv94ykc6zlk2qzmkt43z5rxbb0
> I've no idea why I dropped the idea then, however, the issue is exactly the
> same as I resolve now.
>
> However, that idea was more involved and it included the proposal to
> distinguish "start of test" from "preparation steps in each thread".
>
> ---
>
> There are lots of questions like "how many threads do I need", "why the
> load does not match expectations".
> For instance,
> https://lists.apache.org/thread/syjbkcxk3cvp2c1g9hsxm8jcwc8776w1
>
> There are lots of questions like "how do I configure X requests per
> minute".
>
> Both topics are addressed by my current thread group.
>
> ---
>
> Vincent mentions the pacing feature, however, I believe it is really really
> close to my current proposal as well.
> For instance, "pacing of 1 min" means exactly the same as configuring "1
> request per minute".
> In other words, if you configure the new thread group to "1 request per
> minute", it would spawn exactly one request every minute (on average).
> If you configure "0.5 requests per minute" you get the same thing as
> "pacing 2 min".
>
> Currently "schedule string" contains "rate(X/min)", "random_arrivals(X
> min)", "even_arrivals(X min)", "pause(X min)" calls.
> One can achieve "pacing" feature via "rate(${1/pacing}/min)".
> Of course, I can add "pacing(X min)" macro that would be converted to
> "rate(1/X)" in the internal representation, however,
> I am not convinced it is worth adding taking into account it would make it
> harder for the users to pick the right tool (they would have to choose
> between rate and pacing).
>
> I think 99.42% of all SLAs were like "number of users", "max number of
> concurrent requests", "number of requests per second".
> I do not think I saw a non-functional requirement like "single thread must
> not issue requests closer than X seconds".
>
> In other words, I think I understand what pacing means (thanks to Vincent's
> explanation and pictures), however, I do not see which business
> requirements
> make "pacing" easier to configure and use in reporting to the management.
>
> The bad thing with pacing implementation, it would coordinate requests from
> different thread groups magnifying coordinated omission issues.
>
> I would suggest configuring the request rate.
>
> Vladimir
>

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Mariusz>also discussion about JMeter threading model and constraints

Frankly speaking, this time I just implemented the thread group the way I
wanted, and I did zero research on past requests.
Thanks for the links above; I think they discuss a very similar
problem, and I think Kirk would appreciate the new thread group.
However, I'm not sure Kirk uses JMeter often nowadays :-/

A "fun" discovery is my own mail on exactly the same issue from 2015:
https://lists.apache.org/thread/2mnjf3vv94ykc6zlk2qzmkt43z5rxbb0
I've no idea why I dropped the idea then; however, the issue is exactly the
same as the one I am resolving now.

However, that idea was more involved and it included the proposal to
distinguish "start of test" from "preparation steps in each thread".

---

There are lots of questions like "how many threads do I need" and "why does
the load not match expectations".
For instance,
https://lists.apache.org/thread/syjbkcxk3cvp2c1g9hsxm8jcwc8776w1

There are lots of questions like "how do I configure X requests per minute".

Both topics are addressed by my current thread group.

---

Vincent mentions the pacing feature, however, I believe it is really really
close to my current proposal as well.
For instance, "pacing of 1 min" means exactly the same as configuring "1
request per minute".
In other words, if you configure the new thread group to "1 request per
minute", it would spawn exactly one request every minute (on average).
If you configure "0.5 requests per minute" you get the same thing as
"pacing 2 min".

Currently "schedule string" contains "rate(X/min)", "random_arrivals(X
min)", "even_arrivals(X min)", "pause(X min)" calls.
One can achieve "pacing" feature via "rate(${1/pacing}/min)".
Of course, I could add a "pacing(X min)" macro that would be converted to
"rate(1/X)" in the internal representation; however,
I am not convinced it is worth adding, since it would make it
harder for users to pick the right tool (they would have to choose
between rate and pacing).
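The rate(${1/pacing}/min) equivalence above is plain arithmetic; here is a small sketch of the conversion (illustrative only, not part of the proposed schedule DSL):

```java
// Illustrative sketch of the pacing <-> rate equivalence discussed above:
// a pacing of X minutes corresponds to an average rate of 1/X iterations
// per minute, and vice versa.
public class PacingAsRate {
    static double pacingToRatePerMin(double pacingMin) {
        return 1.0 / pacingMin;   // e.g. pacing 2 min -> 0.5/min
    }

    static double rateToPacingMin(double ratePerMin) {
        return 1.0 / ratePerMin;  // e.g. 0.5/min -> pacing 2 min
    }

    public static void main(String[] args) {
        System.out.println(pacingToRatePerMin(2.0)); // 0.5
        System.out.println(rateToPacingMin(0.5));    // 2.0
    }
}
```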

I think 99.42% of all SLAs are stated as "number of users", "max number of
concurrent requests", or "number of requests per second".
I do not think I have ever seen a non-functional requirement like "a single
thread must not issue requests closer than X seconds apart".

In other words, I think I understand what pacing means (thanks to Vincent's
explanation and pictures); however, I do not see which business requirements
make "pacing" easier to configure and use in reporting to management.

The bad thing about a pacing implementation is that it would coordinate
requests from different thread groups, magnifying coordinated-omission issues.

I would suggest configuring the request rate.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Mariusz W <ma...@gmail.com>.
I think the current Thread Group is natively a closed model rather than an open
one. The "delay thread creation" option makes it a bit more open-friendly, but
it is not a perfect solution, and this Thread Group has no built-in feature to
create a load characteristic that changes over time.
The thread group Vladimir is building can change the arrival
rate over time. More features could be added to make it more powerful, e.g.
a fixed thread count or a concurrency limit as in the "Arrivals Thread Group"
(https://jmeter-plugins.org/wiki/ArrivalsThreadGroup/). Having the
new thread group built in would be a great feature for testers. Thread groups
in plugins are fine, but not every corporate environment allows downloading
plugins :( . Not every corporate environment has migrated to Java 17 either.
So, IMHO, it is better to use Kotlin on Java 8+ than to require Java 17
quickly. Migration to a newer Java version will also be needed (and should be
planned), but I think first to version 11.


ps.
I recommend some old discussions about the load generated with JMeter (and
about the JMeter threading model and its constraints):
https://markmail.org/thread/5xjpan3u5vtz47zg
https://markmail.org/thread/ne27yq3o7r34zhdt


Regards,
Mariusz

On Mon, 8 Nov 2021 at 10:41, Antonio Gomes Rodrigues <ra...@gmail.com>
wrote:

> Hi
>
> In my opinion, we need to keep the old thread group to allow us to simulate
> open and closed models and because some time we need to simulate X users
> and not only X action.
>
> About Kotlin, I don't have an opinion.
>
> On Sun, Nov 7, 2021 at 18:16, Vincent Daburon <vd...@gmail.com> wrote:
>
> > Hi,
> > For me JMeter is only in java for the main functionalities including
> > the groups of threads and I do not appreciate that we add kotlin
> > language to the project.
> > First line of the Jmeter overview :  “ The Apache JMeter™ application
> > is open source software, a 100% pure Java application “
> >
> > Adding so many additional libraries just to display graphs lines seems
> > disproportionate to me.
> >
> >  Personally I never use the throughput computation functionalities
> > because it is quite difficult to understand, it does not work when
> > there are optional calls (IF).
> >
> > I use the notion of PACING which is much easier and which exists in
> > other performance testing tools.
> > The pacing is the minimum duration before doing a new iteration.
> > With the pacing there is a calculation to add a dynamic wait pause in
> > order to reach the duration of the pacing.
> > Eg : Pacing 2min, iteration duration until last sampler 1min45sec, the
> > dynamic wait pause will be 15 sec.
> >
> > For me this feature is too impacting on current developments (kotlin,
> libs
> > SVG).
> >
> > I don't think this feature should be added in this state in the code of
> > JMeter
> >
> > The classic Thread group should not be deprecated either because it
> > suits me for my needs.
> >
> > Regards.
> >
> > Vincent DABURON
> >
> > JMeter user since 2004 and testing professional
> >
> > On Sun, Nov 7, 2021 at 15:45, Vladimir Sitnikov
> > <si...@gmail.com> wrote:
> > >
> > > >Why not call it « Open Model thread group »  instead of precise
> > throughput
> > > >thread group?
> > >
> > > Naming is hard, and I have no idea of the proper name. Suggestions are
> > > welcome.
> > > I used "precise thread group" just to pick a name and get the thing
> > running.
> > >
> > > I think "open model thread group" is not exactly right since after
> > > thread(N) addition the thread group
> > > is no longer "open model".
> > > On the other hand, a sufficiently large number of threads in "closed
> > model"
> > > is not really different from "open model",
> > > so the existing thread groups are "open" as well if user configures big
> > > enough thread counts.
> > >
> > > >Precise is a bit weird as it would mean others are not.
> > >
> > > The thing is the new group generates the accurate load in terms of the
> > > number of samples.
> > > For instance, if you configure rate(10/min) random_arrivals(1 min),
> then
> > > you get exactly 10 samples.
> > >
> > > However, I agree "precise thread group" sounds weird.
> > >
> > > Vladimir
> >
>

Re: Move "precise throughput computation" to thread group

Posted by Antonio Gomes Rodrigues <ra...@gmail.com>.
Hi

In my opinion, we need to keep the old thread group, to let us simulate both
open and closed models, and because sometimes we need to simulate X users
and not only X actions.

About Kotlin, I don't have an opinion.

On Sun, Nov 7, 2021 at 18:16, Vincent Daburon <vd...@gmail.com> wrote:

> Hi,
> For me JMeter is only in java for the main functionalities including
> the groups of threads and I do not appreciate that we add kotlin
> language to the project.
> First line of the Jmeter overview :  “ The Apache JMeter™ application
> is open source software, a 100% pure Java application “
>
> Adding so many additional libraries just to display graphs lines seems
> disproportionate to me.
>
>  Personally I never use the throughput computation functionalities
> because it is quite difficult to understand, it does not work when
> there are optional calls (IF).
>
> I use the notion of PACING which is much easier and which exists in
> other performance testing tools.
> The pacing is the minimum duration before doing a new iteration.
> With the pacing there is a calculation to add a dynamic wait pause in
> order to reach the duration of the pacing.
> Eg : Pacing 2min, iteration duration until last sampler 1min45sec, the
> dynamic wait pause will be 15 sec.
>
> For me this feature is too impacting on current developments (kotlin, libs
> SVG).
>
> I don't think this feature should be added in this state in the code of
> JMeter
>
> The classic Thread group should not be deprecated either because it
> suits me for my needs.
>
> Regards.
>
> Vincent DABURON
>
> JMeter user since 2004 and testing professional
>
> On Sun, Nov 7, 2021 at 15:45, Vladimir Sitnikov
> <si...@gmail.com> wrote:
> >
> > >Why not call it « Open Model thread group »  instead of precise
> throughput
> > >thread group?
> >
> > Naming is hard, and I have no idea of the proper name. Suggestions are
> > welcome.
> > I used "precise thread group" just to pick a name and get the thing
> running.
> >
> > I think "open model thread group" is not exactly right since after
> > thread(N) addition the thread group
> > is no longer "open model".
> > On the other hand, a sufficiently large number of threads in "closed
> model"
> > is not really different from "open model",
> > so the existing thread groups are "open" as well if user configures big
> > enough thread counts.
> >
> > >Precise is a bit weird as it would mean others are not.
> >
> > The thing is the new group generates the accurate load in terms of the
> > number of samples.
> > For instance, if you configure rate(10/min) random_arrivals(1 min), then
> > you get exactly 10 samples.
> >
> > However, I agree "precise thread group" sounds weird.
> >
> > Vladimir
>

Re: Move "precise throughput computation" to thread group

Posted by Vincent Daburon <vd...@gmail.com>.
Hi,
For me, JMeter's main functionality, including the thread
groups, should stay pure Java, and I do not appreciate adding the Kotlin
language to the project.
The first line of the JMeter overview reads: "The Apache JMeter™ application
is open source software, a 100% pure Java application".

Adding so many additional libraries just to display graph lines seems
disproportionate to me.

Personally, I never use the throughput computation functionality,
because it is quite difficult to understand and it does not work when
there are optional calls (IF).

I use the notion of PACING, which is much easier and which exists in
other performance-testing tools.
Pacing is the minimum duration of an iteration: a dynamic wait
pause is computed and added so that the iteration reaches the pacing duration.
E.g.: pacing 2 min, iteration duration up to the last sampler 1 min 45 sec; the
dynamic wait pause will then be 15 sec.
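The 15-second example can be sketched as follows (an illustration of the pacing computation described here, not code from any specific tool):

```java
// Sketch of the pacing computation described above: the dynamic pause is
// whatever remains of the pacing interval after the iteration's own duration
// (and zero when the iteration already overran the pacing).
public class Pacing {
    static long pauseMillis(long pacingMillis, long iterationMillis) {
        return Math.max(0, pacingMillis - iterationMillis);
    }

    public static void main(String[] args) {
        // Pacing 2 min, iteration took 1 min 45 sec -> wait 15 sec
        System.out.println(pauseMillis(120_000, 105_000)); // 15000
        // Iteration slower than the pacing -> no extra wait
        System.out.println(pauseMillis(120_000, 130_000)); // 0
    }
}
```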

For me this feature is too impactful on the current code base (Kotlin, SVG
libs).

I don't think this feature should be added to the JMeter code base in its
current state.

The classic Thread Group should not be deprecated either, because it
suits my needs.

Regards.

Vincent DABURON

JMeter user since 2004 and testing professional

On Sun, Nov 7, 2021 at 15:45, Vladimir Sitnikov
<si...@gmail.com> wrote:
>
> >Why not call it « Open Model thread group »  instead of precise throughput
> >thread group?
>
> Naming is hard, and I have no idea of the proper name. Suggestions are
> welcome.
> I used "precise thread group" just to pick a name and get the thing running.
>
> I think "open model thread group" is not exactly right since after
> thread(N) addition the thread group
> is no longer "open model".
> On the other hand, a sufficiently large number of threads in "closed model"
> is not really different from "open model",
> so the existing thread groups are "open" as well if user configures big
> enough thread counts.
>
> >Precise is a bit weird as it would mean others are not.
>
> The thing is the new group generates the accurate load in terms of the
> number of samples.
> For instance, if you configure rate(10/min) random_arrivals(1 min), then
> you get exactly 10 samples.
>
> However, I agree "precise thread group" sounds weird.
>
> Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
>Why not call it « Open Model thread group »  instead of precise throughput
>thread group?

Naming is hard, and I have no idea of the proper name. Suggestions are
welcome.
I used "precise thread group" just to pick a name and get the thing running.

I think "open model thread group" is not exactly right, since after a
thread(N) addition the thread group
is no longer an "open model".
On the other hand, a sufficiently large number of threads in a "closed model"
is not really different from an "open model",
so the existing thread groups are "open" as well, if the user configures big
enough thread counts.

>Precise is a bit weird as it would mean others are not.

The thing is that the new group generates an exact load in terms of the
number of samples.
For instance, if you configure rate(10/min) random_arrivals(1 min), then
you get exactly 10 samples.
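One way to get an exact sample count while keeping Poisson-like randomness is to draw N uniformly distributed fire times over the window and sort them (a Poisson process conditioned on its count). This is a hedged sketch of that idea, not the actual OpenModelThreadGroup implementation:

```java
import java.util.Arrays;
import java.util.Random;

// Hedged sketch (not the actual JMeter code): generating exactly N fire
// times over a window, with Poisson-process-like randomness, by drawing N
// uniform points and sorting them.
public class ExactArrivals {
    static double[] schedule(int n, double durationSec, long seed) {
        Random rnd = new Random(seed);             // fixed seed => repeatable
        double[] t = new double[n];
        for (int i = 0; i < n; i++) {
            t[i] = rnd.nextDouble() * durationSec; // uniform in [0, duration)
        }
        Arrays.sort(t);                            // fire times in order
        return t;
    }

    public static void main(String[] args) {
        // rate(10/min) random_arrivals(1 min) => exactly 10 events in 60 s
        double[] fireTimes = schedule(10, 60.0, 42L);
        System.out.println(fireTimes.length);      // 10
    }
}
```

Note how a saved seed would give the repeatable sequence Antonio asked about earlier in the thread.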

However, I agree "precise thread group" sounds weird.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <p....@ubik-ingenierie.com>.
Hello,

Why not call it « Open Model thread group »  instead of precise throughput
thread group?

Precise is a bit weird, as it would imply the others are not.

Regards


On Sunday, November 7, 2021, Vladimir Sitnikov <si...@gmail.com>
wrote:

> Mariusz>Vladimir, will your new thread group allow to simulate open system?
>
> Currently, the code simulates an open model, and it does not limit the
> number of threads it spawns.
>
> Vladimir
>


-- 
Cordialement
Philippe M.
Ubik-Ingenierie

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Mariusz>Vladimir, will your new thread group allow simulating an open system?

Currently, the code simulates an open model, and it does not limit the
number of threads it spawns.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Mariusz W <ma...@gmail.com>.
Hi,
Vladimir, will your new thread group allow simulating an open system? I am
thinking of defining new request arrivals at a defined rate (or
characteristic), irrespective of the number of threads in the system (and
created by the generator). Something like what is described in:
https://www.cs.cmu.edu/~bianca/nsdi06.pdf
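
For context, the open/closed distinction can be sketched with a
back-of-the-envelope model (illustration only, not JMeter code; the function
names are made up):

```java
// Open model: the arrival schedule alone drives the load.
// Closed model: throughput is coupled to response time via the thread count.
public class OpenVsClosed {
    static int openModelEvents(double ratePerSec, double durationSec) {
        // arrivals keep coming regardless of how long responses take
        return (int) Math.round(ratePerSec * durationSec);
    }

    static int closedModelEvents(int threads, double serviceTimeSec, double durationSec) {
        // each thread waits for a response before sending the next request
        return (int) Math.round(threads * durationSec / serviceTimeSec);
    }

    public static void main(String[] args) {
        // Open: 5/sec for 60s => 300 requests no matter how slow the server is.
        System.out.println(openModelEvents(5.0, 60.0));       // prints 300
        // Closed: 10 threads with 2s responses over 60s also give 300 requests,
        // but doubling the response time halves the throughput, which an open
        // model would not do.
        System.out.println(closedModelEvents(10, 2.0, 60.0)); // prints 300
        System.out.println(closedModelEvents(10, 4.0, 60.0)); // prints 150
    }
}
```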


If so, what thread(X) will mean in:
"
The open question is how "the number of threads" should be configured, and
I think it would go via threads(8).

For instance,
"10 threads, during 1min":
rate(100500/ms) threads(10) random_arrivals(1 min)

"start from 1 thread doing 1/sec and gradually go to 10 threads doing
100/sec in 1 min"
threads(1) rate(1/sec) random_arrivals(10 min) threads(10) rate(100/sec)"

PS: I haven't looked at the code yet, so I don't know how it is implemented
now :) I may also not fully understand the functionality of the new component
yet.

Regards,
Mariusz

On Sat, 6 Nov 2021 at 12:53, Vladimir Sitnikov <si...@gmail.com>
wrote:

> Felix>I think the old ThreadGroup has a nice and simple interface, that can
> be
> Felix>understood in a short time (my opinion :))
>
> Well, the old thread group easily configures "N threads", however, as soon
> as you need to set the desired request rate you are out of luck :-)
> One of the key questions is "how many threads do I need?", and there's no
> single answer since you don't want to over-provision and have 10000
> idle threads.
>
> Other solutions involve more intimate integration between the thread group
> and the shaping timer (see
>
> https://jmeter-plugins.org/wiki/ConcurrencyThreadGroup/#Use-With-Throughput-Shaping-Timer-Feedback-Function
> ),
> however, I would say the resulting configuration is far from being
> intuitive.
>
> Well, even placing a single timer in JMeter requires a PhD, so asking
> "thread group + timer + feedback function" does not seem to be good for UX.
>
> ----
>
> Just in case, I did consider adding something like "feedback function" to
> the existing thread group, however, it looks like
> the only use cases I have at hand are "making the test produce the desired
> load without overprovisioning the threads".
>
> In other words, integrating the load profile into the thread group covers
> all my cases and it yields an easier-to-understand configuration.
>
>
> Felix>That should probably be run on some users and hear their feedback
>
> I'm glad you asked.
> Suggestions are welcome.
>
> I did announce the thread group on Twitter:
> https://twitter.com/VladimirSitnikv/status/1455141600213012489
> I announced it in qa_load Telegram chat (Russian-mostly, ~3800
> participants): https://t.me/qa_load/68193
>
> I got exactly 0 comments/suggestions regarding the semantics :-/
>
> The first comment I got was "please add a chart", and then someone managed
> to get what they wanted with the following config:
>
> ${__groovy(def result = "";
> for (def i=0;i<10;i++) {
>   def pattern = " random_arrivals(%ss) rate(%s/sec) random_arrivals(%ss)";
>   def step = props.get("start") + props.get("step")*i;
>   result=result+String.format(pattern\, props.get("ramp")\, step\,
> props.get("duration"));
> };
> return result;)}
>
> A good thing is that when the properties are present in jmeter.properties,
> then the UI displays the load profile even in case it is built via Groovy
> script.
>
> In case you wonder the groovy expression above evaluates with start=28,
> step=28, ramp=60, duration=300 as follows:
> random_arrivals(60s) rate(28/sec) random_arrivals(300s)
> random_arrivals(60s) rate(56/sec) random_arrivals(300s)
> random_arrivals(60s) rate(84/sec) random_arrivals(300s)
> ....
> which is
> "increase load from 0 to 28/sec during 60 sec", "hold 28/sec for 5 minutes"
> "increase load from 28/sec to 56/sec during 60sec", "hold 56/sec for 5
> minutes"
> "increase load from 56/sec to 84/sec during 60sec", "hold 84/sec for 5
> minutes"
> ...
>
> ----
>
> I was trying to avoid multi-argument "methods" since there's no
> autocomplete in JMeter UI, and you never know the order and the meaning of
> the arguments.
> That is why I went with one-argument "rate(4/sec)" and "random_arrivals(5
> min)".
>
> The open question is how "the number of threads" should be configured, and
> I think it would go via threads(8).
>
> For instance,
> "10 threads, during 1min":
> rate(100500/ms) threads(10) random_arrivals(1 min)
>
> "start from 1 thread doing 1/sec and gradually go to 10 threads doing
> 100/sec in 1 min"
> threads(1) rate(1/sec) random_arrivals(10 min) threads(10) rate(100/sec)
>
> "threads(.)" and "rate(.)" could be put in any order.
>
> Felix>But as Vladimir explained, it would add a
> Felix>lot of unwanted dependencies in the core
>
> The overall increase is 16MiB where 1.8MiB is for kotlin-stdlib, and the
> rest is Apache Batik (for SVG) and lets-plot (for charting).
> I hope lets-plot does not need kotlin-reflect (2.9MiB):
> https://github.com/JetBrains/lets-plot/issues/471
> However, I think the chart does make it way easier to understand the load
> profile, so even 16MiB increase is ok.
>
> One of the options would be to split "core" into "core-impl" and "core-ui"
> where core-ui depends on core-impl and various UI libraries.
> That might be helpful for all the modules so non-gui users (e.g. via Maven
> or Gradle) do not have to pull UI dependencies.
> However, I think we can split modules later.
>
> On the other hand, jmeter-java-dsl has a clever mode when they launch "View
> Results Tree" Swing UI:
> https://abstracta.github.io/jmeter-java-dsl/guide/#view-results-tree, so
> they would depend on "view results tree UI" module anyway.
>
> Felix>* New contributors falling out of the sky
> Felix>I would like to see that happen, but haven't observed it, yet :)
>
> Looks like Kotlin won't make it worse :-)
>
> Felix>don't know the state of
> Felix>usability it has reached regarding Kotlin
>
> Frankly speaking, I've no idea how it works in Eclipse.
>
> Vladimir
>

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Felix>I think the old ThreadGroup has a nice and simple interface, that can
be
Felix>understood in a short time (my opinion :))

Well, the old thread group easily configures "N threads", however, as soon
as you need to set the desired request rate you are out of luck :-)
One of the key questions is "how many threads do I need?", and there's no
single answer since you don't want to over-provision and have 10000
idle threads.
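
As a hedged aside, a rough answer to "how many threads do I need?" follows
from Little's law (average concurrency = arrival rate x response time). The
helper below and its safety factor are illustrative assumptions, not JMeter
functionality:

```java
// Estimate the thread count needed to sustain a target request rate without
// over-provisioning, using Little's law: concurrency = rate * response time.
public class LittlesLaw {
    static int threadsNeeded(double ratePerSec, double responseTimeSec, double safetyFactor) {
        // round up so the pool is never undersized for the average case
        return (int) Math.ceil(ratePerSec * responseTimeSec * safetyFactor);
    }

    public static void main(String[] args) {
        // 100 req/sec with 0.5s responses: ~50 concurrent requests on average;
        // a 1.5x safety factor gives 75 threads instead of guessing 10000.
        System.out.println(threadsNeeded(100.0, 0.5, 1.5)); // prints 75
    }
}
```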

Other solutions involve more intimate integration between the thread group
and the shaping timer (see
https://jmeter-plugins.org/wiki/ConcurrencyThreadGroup/#Use-With-Throughput-Shaping-Timer-Feedback-Function),
however, I would say the resulting configuration is far from being
intuitive.

Well, even placing a single timer in JMeter requires a PhD, so asking
"thread group + timer + feedback function" does not seem to be good for UX.

----

Just in case, I did consider adding something like "feedback function" to
the existing thread group, however, it looks like
the only use cases I have at hand are "making the test produce the desired
load without overprovisioning the threads".

In other words, integrating the load profile into the thread group covers
all my cases and it yields an easier-to-understand configuration.


Felix>That should probably be run on some users and hear their feedback

I'm glad you asked.
Suggestions are welcome.

I did announce the thread group on Twitter:
https://twitter.com/VladimirSitnikv/status/1455141600213012489
I announced it in qa_load Telegram chat (Russian-mostly, ~3800
participants): https://t.me/qa_load/68193

I got exactly 0 comments/suggestions regarding the semantics :-/

The first comment I got was "please add a chart", and then someone managed
to get what they wanted with the following config:

${__groovy(def result = "";
for (def i=0;i<10;i++) {
  def pattern = " random_arrivals(%ss) rate(%s/sec) random_arrivals(%ss)";
  def step = props.get("start") + props.get("step")*i;
  result=result+String.format(pattern\, props.get("ramp")\, step\,
props.get("duration"));
};
return result;)}

A good thing is that when the properties are present in jmeter.properties,
the UI displays the load profile even when it is built via a Groovy
script.

In case you wonder, the Groovy expression above, with start=28,
step=28, ramp=60, duration=300, evaluates as follows:
random_arrivals(60s) rate(28/sec) random_arrivals(300s)
random_arrivals(60s) rate(56/sec) random_arrivals(300s)
random_arrivals(60s) rate(84/sec) random_arrivals(300s)
....
which is
"increase load from 0 to 28/sec during 60 sec", "hold 28/sec for 5 minutes"
"increase load from 28/sec to 56/sec during 60sec", "hold 56/sec for 5
minutes"
"increase load from 56/sec to 84/sec during 60sec", "hold 84/sec for 5
minutes"
...
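
The same stepped profile could be generated without Groovy; here is a
hypothetical Java equivalent of the expression above (the method name,
parameters, and output format are assumptions for illustration, not a JMeter
API):

```java
// Build a stepped load profile string: for each step, ramp up for rampSec,
// then hold the new rate for holdSec.
public class SteppedProfile {
    static String steppedProfile(int start, int step, int rampSec, int holdSec, int steps) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < steps; i++) {
            int rate = start + step * i; // 28, 56, 84, ... for start=28, step=28
            if (i > 0) sb.append(' ');
            sb.append("random_arrivals(").append(rampSec).append("s)")
              .append(" rate(").append(rate).append("/sec)")
              .append(" random_arrivals(").append(holdSec).append("s)");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(steppedProfile(28, 28, 60, 300, 3));
    }
}
```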

----

I was trying to avoid multi-argument "methods" since there's no
autocomplete in JMeter UI, and you never know the order and the meaning of
the arguments.
That is why I went with one-argument "rate(4/sec)" and "random_arrivals(5
min)".

The open question is how "the number of threads" should be configured, and
I think it would go via threads(8).

For instance,
"10 threads, during 1min":
rate(100500/ms) threads(10) random_arrivals(1 min)

"start from 1 thread doing 1/sec and gradually go to 10 threads doing
100/sec in 1 min"
threads(1) rate(1/sec) random_arrivals(10 min) threads(10) rate(100/sec)

"threads(.)" and "rate(.)" could be put in any order.

Felix>But as Vladimir explained, it would add a
Felix>lot of unwanted dependencies in the core

The overall increase is 16MiB where 1.8MiB is for kotlin-stdlib, and the
rest is Apache Batik (for SVG) and lets-plot (for charting).
I hope lets-plot does not need kotlin-reflect (2.9MiB):
https://github.com/JetBrains/lets-plot/issues/471
However, I think the chart does make it way easier to understand the load
profile, so even a 16MiB increase is OK.

One of the options would be to split "core" into "core-impl" and "core-ui"
where core-ui depends on core-impl and various UI libraries.
That might be helpful for all the modules so non-gui users (e.g. via Maven
or Gradle) do not have to pull UI dependencies.
However, I think we can split modules later.

On the other hand, jmeter-java-dsl has a clever mode where it launches the
"View Results Tree" Swing UI:
https://abstracta.github.io/jmeter-java-dsl/guide/#view-results-tree, so
it would depend on the "view results tree UI" module anyway.

Felix>* New contributors falling out of the sky
Felix>I would like to see that happen, but haven't observed it, yet :)

Looks like Kotlin won't make it worse :-)

Felix>don't know the state of
Felix>usability it has reached regarding Kotlin

Frankly speaking, I've no idea how it works in Eclipse.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Felix Schumacher <fe...@internetallee.de>.
Hi,

I was a bit silent on the last discussion, so I try to combine my
opinions on some of the points in this mail.

* Deprecating the "old" ThreadGroup.

I think the old ThreadGroup has a nice and simple interface that can be
understood in a short time (my opinion :)). The new one should be as
easy for simple use cases.

Removing it would not be a good idea until the new one catches on (and
even then it might be better to keep it).

* Semantic of the DSL

That should probably be run by some users to hear their feedback. For
me it was not clear why "rate(0/min) even_arrivals(10m) rate(60/m)"
would result in a continuous ramp-up and not a sharp-edged one.

Maybe we could include the new Thread Group with a big *experimental*
label for a while.

* Including Kotlin and using it for further development

I once tried Scala and liked it for my simple tests. I haven't really
tried out Kotlin yet, but I believe I could work with it.

However, I really like Eclipse as my IDE and don't know the state of
usability it has reached regarding Kotlin. (I know that I don't like
working on the Gradle files for exactly that reason.)

* Including lets-plot

The examples in the GitHub repo look really nice, and nice-looking
graphics are always a good way to attract more users. Therefore
it would be cool to have it. But as Vladimir explained, it would add a
lot of unwanted dependencies to the core. Could we work around this somehow?

* Moving to a newer Java baseline

That is a topic I have wanted to raise for a long time. More and more
libraries are 9+ or 11+ nowadays, and we should follow up. I am
uncertain whether to jump to 17 or 11, but either way, I would really
like to switch to something more modern.

* Using the Java Module system

I think it would be a lot of work to switch to the module system. We
would have to clean up our jars to be exclusive owners of their packages,
define the dependencies more explicitly, look at our extensive usage of
reflection, ...

Is it possible to keep going on without the module system in newer Java
versions? I have no idea.

* New contributors falling out of the sky

I would like to see that happen, but haven't observed it, yet :)

Felix

On 05.11.21 at 22:35, Vladimir Sitnikov wrote:
>> I said that as scheduleParser.kt, scheduleTokenizer.kt are related to DSL,
>> I am ok with them being in Kotlin.
> Then it is a misunderstanding.
> They implement a typical textbook regular-expression-based tokenizer and
> parser.
>
> The files have nothing to do with Kotlin DSL for programmatic test plan
> generation so far.
>
> Of course, DSL is a very broad term, however, the key aim of
> scheduleParser.kt, scheduleTokenizer.kt
> is to parse **string** with load profile configuration (e.g. "rate(5/sec)
> random_arrivals(2 min) rate(10/sec)")
> and convert it to Java objects or throw a parse error (e.g. in case a user
> types an invalid time unit).
>
> That parsing logic can be implemented in Java, however, it would be
> cumbersome taking into account Java 8.
>
>> Now if you don't work in Kotlin
> My job is ~90% PL/SQL though.
>
>>> Philippe>Finalizing move to Java 11 without all the flags might be a
> first
>> step.
> Philippe>What about this ?
>
> TL;DR: I would support that, however, I am not keen on spending my time on
> doing the migration.
>
> Frankly speaking, I think Java 11 is out of date now, and we, as an
> application, can move to Java 17 right away.
> Here's Elasticsearch moving to Java 17:
> https://twitter.com/xeraa/status/1455980076001071106
> Upgrading the Java baseline is probably worth doing; however, it is not clear
> what features it brings.
>
> If we bump the baseline to Java 11 (or 17, does not really matter much),
> then we probably can have access to a broader set of libraries.
> For instance, https://github.com/kirill-grouchnikov/radiance is Java 9+.
>
> However, I am much more fascinated by creating the improved thread group,
> creating Kotlin-based DSL for test plan generation,
> creating in-core parallel controller, trying async clients (for HTTP/gRPC),
> etc.
> Java 11 does not seem to make those tasks significantly easier.
>
> Vladimir
>


Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
>I said that as scheduleParser.kt, scheduleTokenizer.kt are related to DSL,
>I am ok with them being in Kotlin.

Then it is a misunderstanding.
They implement a typical textbook regular-expression-based tokenizer and
parser.

The files have nothing to do with Kotlin DSL for programmatic test plan
generation so far.

Of course, DSL is a very broad term, however, the key aim of
scheduleParser.kt, scheduleTokenizer.kt
is to parse **string** with load profile configuration (e.g. "rate(5/sec)
random_arrivals(2 min) rate(10/sec)")
and convert it to Java objects or throw a parse error (e.g. in case a user
types an invalid time unit).

That parsing logic can be implemented in Java; however, it would be
cumbersome given the Java 8 baseline.
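
A minimal sketch of such a regex-based tokenizer, in Java for illustration
(this is not the actual scheduleTokenizer.kt; the class and helper names are
made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Tokenize schedule strings like "rate(5/sec) random_arrivals(2 min) rate(10/sec)"
// into (name, argument) pairs; a real parser would then validate units and
// convert each pair to a typed schedule object, or raise a parse error.
public class ScheduleTokenizer {
    static final Pattern TOKEN = Pattern.compile("(\\w+)\\(([^)]*)\\)");

    static List<String[]> tokenize(String schedule) {
        List<String[]> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(schedule);
        while (m.find()) {
            tokens.add(new String[] { m.group(1), m.group(2) }); // {name, argument}
        }
        return tokens;
    }

    public static void main(String[] args) {
        for (String[] t : tokenize("rate(5/sec) random_arrivals(2 min) rate(10/sec)")) {
            System.out.println(t[0] + " <- " + t[1]);
        }
    }
}
```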

>Now if you don't work in Kotlin

My job is ~90% PL/SQL though.

>> Philippe>Finalizing move to Java 11 without all the flags might be a
first
> step.
Philippe>What about this ?

TL;DR: I would support that, however, I am not keen on spending my time on
doing the migration.

Frankly speaking, I think Java 11 is out of date now, and we, as an
application, can move to Java 17 right away.
Here's Elasticsearch moving to Java 17:
https://twitter.com/xeraa/status/1455980076001071106
Upgrading the Java baseline is probably worth doing; however, it is not clear
what features it brings.

If we bump the baseline to Java 11 (or 17, does not really matter much),
then we probably can have access to a broader set of libraries.
For instance, https://github.com/kirill-grouchnikov/radiance is Java 9+.

However, I am much more fascinated by creating the improved thread group,
creating Kotlin-based DSL for test plan generation,
creating in-core parallel controller, trying async clients (for HTTP/gRPC),
etc.
Java 11 does not seem to make those tasks significantly easier.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <ph...@gmail.com>.
On Fri, Nov 5, 2021 at 9:18 PM Vladimir Sitnikov <
sitnikov.vladimir@gmail.com> wrote:

> Philippe>@All members, what do you think ?
>
> I guess this question is not only for committers, so don't hesitate to
> speak up.
>

For me, contributors' opinions carry the biggest weight.
But I am OK to see what everyone says.

>
> Philippe>Maybe but in the current active members of the team how many  are
> as good
> Philippe>in Kotlin as in Java ?
>
> Frankly speaking, I do not think that is the right question to ask.
>

I think it is :-)

> Adoption is growing (e.g. see
>
> https://developers.googleblog.com/2021/11/announcing-kotlin-support-for-protocol.html
> ),
> and developers can get productive in 2 weeks.
>

Maybe if you use Kotlin every day at work (because you have to,
otherwise you can leave), then maybe
you'll be able to move forward, but I don't believe that in 2 weeks you'll
be the equivalent of what you are in Java.

Now if you don't work in Kotlin and have to learn it in your little
remaining time (not to mention personal and family time), then
for sure in 2 weeks I won't be.


> The language is statically typed, so there are no features that magically
> appear in the runtime.
> What you see is what gets executed.
>
> So I am sure, fixing a bug in Kotlin code is not dramatically different
> from fixing a bug in Java code.
> It might be even easier than fixing a bug in Groovy code since Groovy is
> dynamic.
>
> If fixing a bug or adding a feature requires creating new classes, you can
> still create Java classes.
> However, once you try Kotlin, you won't look back.
>
> Philippe>I am ok to use Kotlin when it brings missing features, but not
> when we can
> Philippe>use Java, in this particular case, I don't think those classes
> should be in
> Philippe>Kotlin in the PR:
>
> What do you think
> of scheduleParser.kt, scheduleTokenizer.kt, ThreadScheduleTest.kt?
>
> I think the files are more feature-dense than the files you listed, and if
> you are OK with reading scheduleParser.kt, scheduleTokenizer.kt,
> then PoissonRamp.kt, PreciseThreadGroup.kt, etc should not be a problem.
>

I said that as scheduleParser.kt, scheduleTokenizer.kt are related to DSL,
I am ok with them being in Kotlin.
I didn't say they were easy to read.


> Philippe>ok for DSL as I already wrote, benefit is clear
>
> I can easily imagine that making the existing classes dsl-friendly might
> require modifications.
> There might be dsl-only interfaces or classes. Do you mean we'll have to
> discuss the language for each and every class?
>
> Philippe>it's not used in the last PR, right?
>
> No, I do not use coroutines and jetpack compose for the new thread group.
>
> Philippe>Finalizing move to Java 11 without all the flags might be a first
> step.
>
What about this ?

> Philippe>What about compatibility of the libraries we use ? I think many
> are not
> Philippe>compatible.
>
> Frankly speaking, I've no idea, however, I assume we can use Java 17
> without modules.
> In other words, we should be good provided the libraries do not use Unsafe
> and reflection against core Java classes.
> Our CI tests run with Java 14 now, and running with 17 should not be that
> different.
> Of course, migrating to Java modules would be a non-trivial task.
>

> However, I agree making all the clients use Java 17+ would be a non-trivial
> exercise for testing, documenting, etc.
>
I agree

> That means records, switch expressions, etc, are not really available for
> JMeter code, while Kotlin provides similar constructs right now.
>
> Vladimir
>


-- 
Regards.
Philippe Mouawad.

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
Philippe>@All members, what do you think ?

I guess this question is not only for committers, so don't hesitate to
speak up.

Philippe>Maybe but in the current active members of the team how many  are
as good
Philippe>in Kotlin as in Java ?

Frankly speaking, I do not think that is the right question to ask.
Adoption is growing (e.g. see
https://developers.googleblog.com/2021/11/announcing-kotlin-support-for-protocol.html
),
and developers can get productive in 2 weeks.

The language is statically typed, so there are no features that magically
appear in the runtime.
What you see is what gets executed.

So I am sure, fixing a bug in Kotlin code is not dramatically different
from fixing a bug in Java code.
It might be even easier than fixing a bug in Groovy code since Groovy is
dynamic.

If fixing a bug or adding a feature requires creating new classes, you can
still create Java classes.
However, once you try Kotlin, you won't look back.

Philippe>I am ok to use Kotlin when it brings missing features, but not
when we can
Philippe>use Java, in this particular case, I don't think those classes
should be in
Philippe>Kotlin in the PR:

What do you think
of scheduleParser.kt, scheduleTokenizer.kt, ThreadScheduleTest.kt?

I think the files are more feature-dense than the files you listed, and if
you are OK with reading scheduleParser.kt, scheduleTokenizer.kt,
then PoissonRamp.kt, PreciseThreadGroup.kt, etc should not be a problem.

Philippe>ok for DSL as I already wrote, benefit is clear

I can easily imagine that making the existing classes dsl-friendly might
require modifications.
There might be dsl-only interfaces or classes. Do you mean we'll have to
discuss the language for each and every class?

Philippe>it's not used in the last PR, right?

No, I do not use coroutines and jetpack compose for the new thread group.

Philippe>Finalizing move to Java 11 without all the flags might be a first
step.
Philippe>What about compatibility of the libraries we use ? I think many
are not
Philippe>compatible.

Frankly speaking, I've no idea, however, I assume we can use Java 17
without modules.
In other words, we should be good provided the libraries do not use Unsafe
and reflection against core Java classes.
Our CI tests run with Java 14 now, and running with 17 should not be that
different.
Of course, migrating to Java modules would be a non-trivial task.

However, I agree making all the clients use Java 17+ would be a non-trivial
exercise for testing, documenting, etc.
That means records, switch expressions, etc, are not really available for
JMeter code, while Kotlin provides similar constructs right now.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <p....@ubik-ingenierie.com>.
On Fri, Nov 5, 2021 at 7:58 AM Vladimir Sitnikov <
sitnikov.vladimir@gmail.com> wrote:

> I do not think there are blockers, however:
> a) I use lets-plot for charting, which is a Kotlin library. I was not able to
> find a Java library (the JFreeChart license is not compatible with the ASF
> requirements)
> Frankly speaking, the chart was the last feature I added, and it was more
> of a coincidence.
>
no problem for this one, I agree it's very hard to find a good OS friendly
API for charting in Java.

b) Kotlin covers null safety <-- in my opinion, this is huge
>
this is not significant enough for me

c) There are cases for having Kotlin in the project where Java solutions do not
> exist: a DSL for programmatic test plan generation,

ok for DSL as I already wrote, benefit is clear


> coroutines (well, Loom +
> virtual threads might be just enough, however, the release date is
> moot),


it's not used in the last PR, right?

Jetpack
> Compose might be a good fit for UI
> d) Kotlin is less verbose, and there are lots of convenience things like
> if-expressions, when-expressions, readonly lists and maps, and so on.
>

Maybe, but among the current active members of the team, how many are as good
in Kotlin as in Java?
Although it's great that you're contributing actively these last weeks, what
will happen when you are less
active?
I must say I consider myself very junior in Kotlin, while I hope I am a
bit better in Java.
And regarding verbosity, sometimes it gives more readable, immediately
understandable code IMO.

e) Kotlin is easier to read: less verbosity yields a better signal-to-noise
> ratio. Even though IDEs can generate all the Java boilerplate code and I
> do not type it, I still have to read it.
>

It's an opinion; I don't share it. As for me, I don't find it easier to
read at all.

f) The language interops with Java just fine, and it does not require data
> conversion.
>
Agreed.


> I do not intend to rewrite the existing code in Kotlin in one day, however,
> I think it makes sense to write new code in Kotlin.
>

I think that would require a discussion before going further.
I am ok to use Kotlin when it brings missing features, but not when we can
use Java, in this particular case, I don't think those classes should be in
Kotlin in the PR:

   -
   src/core/src/main/kotlin/org/apache/jmeter/threads/precise/PoissonRamp.kt
   -
   src/core/src/main/kotlin/org/apache/jmeter/threads/precise/PreciseThreadGroup.kt
   -
   src/core/src/main/kotlin/org/apache/jmeter/threads/precise/PreciseThreadGroupController.kt
   -
   src/core/src/main/kotlin/org/apache/jmeter/threads/precise/ThreadScheduleProcessGenerator.kt
   -
   src/core/src/main/kotlin/org/apache/jmeter/threads/precise/gui/PreciseThreadGroupGui.kt

Those are critical classes if the element is used, and we will have to dig
into edge cases; as for me, I don't feel confident enough
maintaining the current PR code.
@All members, what do you think ?



> By the way, it might be time to bump language version to Java 17 for JMeter
> code. Moving to 17 would ease some of the pain points (records, switch
> expressions), however, it does not resolve all of them.
>

Finalizing the move to Java 11 without all the flags might be a first step.
What about the compatibility of the libraries we use? I think many are not
compatible.


> Vladimir
>


-- 
Regards
Philippe M.
Ubik-Ingenierie

Re: Move "precise throughput computation" to thread group

Posted by Vladimir Sitnikov <si...@gmail.com>.
I do not think there are blockers, however:
a) I use lets-plot for charting, which is a Kotlin library. I was not able to
find a Java library (the JFreeChart license is not compatible with the ASF
requirements)
Frankly speaking, the chart was the last feature I added, and it was more
of a coincidence.
b) Kotlin covers null safety <-- in my opinion, this is huge
c) There are cases for having Kotlin in the project where Java solutions do
not exist: a DSL for programmatic test plan generation, coroutines (well,
Loom + virtual threads might be just enough, however, the release date is
moot), and Jetpack
Compose might be a good fit for the UI
d) Kotlin is less verbose, and there are lots of convenience things like
if-expressions, when-expressions, readonly lists and maps, and so on.
e) Kotlin is easier to read: less verbosity yields a better signal-to-noise
ratio. Even though IDEs can generate all the Java boilerplate code and I
do not type it, I still have to read it.
f) The language interops with Java just fine, and it does not require data
conversion.

I do not intend to rewrite the existing code in Kotlin in one day, however,
I think it makes sense to write new code in Kotlin.

By the way, it might be time to bump language version to Java 17 for JMeter
code. Moving to 17 would ease some of the pain points (records, switch
expressions), however, it does not resolve all of them.

Vladimir

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <ph...@gmail.com>.
I understand that Kotlin is better for DSLs, so I would suggest restricting
Kotlin use to this part only unless there is a blocker in Java.

On Fri, Nov 5, 2021 at 12:24 AM Philippe Mouawad <ph...@gmail.com>
wrote:

> Hello Vladimir,
>
> Thanks for the PR and work on it.
>
> I started looking at the code, and I am wondering if there is a particular
> reason for not using Java for the new development?
> I must say I am not comfortable with mixing Java and Kotlin in the project.
> I have reduced my activity on the project these last months, so my opinion
> does not count as much as that of a more active contributor such as Felix.
>
> But if I was as active as I have been in the past (I may be more active in
> the future), I would tend to stick to Java unless Kotlin brings something
> that is not available in Java.
> In the past, there was this PR https://github.com/apache/jmeter/pull/540
> which was using Kotlin for a good reason I think.
> In this particular case , I don't see it.
>
> Regards
> Philippe
>
>
> On Tue, Nov 2, 2021 at 9:58 PM Antonio Gomes Rodrigues <ra...@gmail.com>
> wrote:
>
>> Hi Vladimir,
>>
>> It looks good, I will test it asap
>>
>> Do you plan to add a feature to load/save events fired time (seed) to have
>> exactly the same repeatable sequence for each test execution?
>>
>> On Sun, Oct 31, 2021 at 21:01, Vladimir Sitnikov <
>> sitnikov.vladimir@gmail.com>
>> wrote:
>>
>> > I wanted to add a chart with "desired load profile", and I found
>> > https://github.com/JetBrains/lets-plot charting library.
>> > The screenshot is in the PR
>> >
>> > It adds some jars to the distribution, however, I think it would be
>> hard to
>> > find another library.
>> > The unfortunate consequence is that UI dependencies become "core"
>> > dependencies. I've no idea what to do with that,
>> > and we might probably want to move UI classes to core-ui module or
>> > something like that.
>> >
>> > The current dependency diff is as follows:
>> >
>> >     54852334 => 71166807 bytes (+16314473 bytes)
>> >     99 => 134 files (+35)
>> >
>> >   +   17536 annotations-13.0.jar
>> >   -   18538 annotations-16.0.2.jar
>> >   +  507197 base-portable-jvm-2.1.0.jar
>> >   +  485809 batik-anim-1.14.jar
>> >   +  424624 batik-awt-util-1.14.jar
>> >   +  703757 batik-bridge-1.14.jar
>> >   +  112373 batik-codec-1.14.jar
>> >   +    8433 batik-constants-1.14.jar
>> >   +  330318 batik-css-1.14.jar
>> >   +  184487 batik-dom-1.14.jar
>> >   +   10238 batik-ext-1.14.jar
>> >   +  192087 batik-gvt-1.14.jar
>> >   +   11466 batik-i18n-1.14.jar
>> >   +   76875 batik-parser-1.14.jar
>> >   +   25876 batik-script-1.14.jar
>> >   +    6663 batik-shared-resources-1.14.jar
>> >   +  232734 batik-svg-dom-1.14.jar
>> >   +  227514 batik-svggen-1.14.jar
>> >   +  129300 batik-transcoder-1.14.jar
>> >   +  127477 batik-util-1.14.jar
>> >   +   33866 batik-xml-1.14.jar
>> >   +   32033 kotlin-logging-jvm-2.0.5.jar
>> >   + 2993765 kotlin-reflect-1.5.21.jar
>> >   + 1505952 kotlin-stdlib-1.5.31.jar
>> >   +  198322 kotlin-stdlib-common-1.5.31.jar
>> >   +   22986 kotlin-stdlib-jdk7-1.5.31.jar
>> >   +   16121 kotlin-stdlib-jdk8-1.5.31.jar
>> >   +  792176 kotlinx-html-jvm-0.7.3.jar
>> >   +  196707 lets-plot-batik-2.1.0.jar
>> >   + 3593892 lets-plot-common-2.1.0.jar
>> >   +  627895 plot-api-jvm-3.0.2.jar
>> >   +  882509 plot-base-portable-jvm-2.1.0.jar
>> >   +  792534 plot-builder-portable-jvm-2.1.0.jar
>> >   +  115007 plot-common-portable-jvm-2.1.0.jar
>> >   +  454864 plot-config-portable-jvm-2.1.0.jar
>> >   +  173932 vis-svg-portable-jvm-2.1.0.jar
>> >   +   85686 xml-apis-ext-1.3.04.jar
>> >
>> > Vladimir
>> >
>>
>
>
> --
> Cordialement.
> Philippe Mouawad.
>
>
>

-- 
Cordialement.
Philippe Mouawad.

Re: Move "precise throughput computation" to thread group

Posted by Philippe Mouawad <ph...@gmail.com>.
Hello Vladimir,

Thanks for the PR and work on it.

I started looking at the code, and I am wondering if there is a particular
reason for not using Java for the new development?
I must say I am not comfortable with mixing Java and Kotlin in the project.
I have reduced my activity on the project these last months, so my opinion does
not count as much as that of a more active contributor such as Felix.

But if I were as active as I have been in the past (I may be more active in
the future), I would tend to stick to Java unless Kotlin brings something
that is not available in Java.
In the past, there was PR https://github.com/apache/jmeter/pull/540,
which used Kotlin for a good reason, I think.
In this particular case, I don't see one.

Regards
Philippe


On Tue, Nov 2, 2021 at 9:58 PM Antonio Gomes Rodrigues <ra...@gmail.com>
wrote:

> Hi Vladimir,
>
> It looks good, I will test it asap
>
> Do you plan to add a feature to load/save events fired time (seed) to have
> exactly the same repeatable sequence for each test execution?
>
> On Sun, Oct 31, 2021 at 9:01 PM Vladimir Sitnikov <
> sitnikov.vladimir@gmail.com>
> wrote:
>
> > I wanted to add a chart with "desired load profile", and I found
> > https://github.com/JetBrains/lets-plot charting library.
> > The screenshot is in the PR
> >
> > It adds some jars to the distribution, however, I think it would be hard
> to
> > find another library.
> > The unfortunate consequence is that UI dependencies become "core"
> > dependencies. I've no idea what to do with that,
> > and we might probably want to move UI classes to core-ui module or
> > something like that.
> >
> > The current dependency diff is as follows:
> >
> >     54852334 => 71166807 bytes (+16314473 bytes)
> >     99 => 134 files (+35)
> >
> >   +   17536 annotations-13.0.jar
> >   -   18538 annotations-16.0.2.jar
> >   +  507197 base-portable-jvm-2.1.0.jar
> >   +  485809 batik-anim-1.14.jar
> >   +  424624 batik-awt-util-1.14.jar
> >   +  703757 batik-bridge-1.14.jar
> >   +  112373 batik-codec-1.14.jar
> >   +    8433 batik-constants-1.14.jar
> >   +  330318 batik-css-1.14.jar
> >   +  184487 batik-dom-1.14.jar
> >   +   10238 batik-ext-1.14.jar
> >   +  192087 batik-gvt-1.14.jar
> >   +   11466 batik-i18n-1.14.jar
> >   +   76875 batik-parser-1.14.jar
> >   +   25876 batik-script-1.14.jar
> >   +    6663 batik-shared-resources-1.14.jar
> >   +  232734 batik-svg-dom-1.14.jar
> >   +  227514 batik-svggen-1.14.jar
> >   +  129300 batik-transcoder-1.14.jar
> >   +  127477 batik-util-1.14.jar
> >   +   33866 batik-xml-1.14.jar
> >   +   32033 kotlin-logging-jvm-2.0.5.jar
> >   + 2993765 kotlin-reflect-1.5.21.jar
> >   + 1505952 kotlin-stdlib-1.5.31.jar
> >   +  198322 kotlin-stdlib-common-1.5.31.jar
> >   +   22986 kotlin-stdlib-jdk7-1.5.31.jar
> >   +   16121 kotlin-stdlib-jdk8-1.5.31.jar
> >   +  792176 kotlinx-html-jvm-0.7.3.jar
> >   +  196707 lets-plot-batik-2.1.0.jar
> >   + 3593892 lets-plot-common-2.1.0.jar
> >   +  627895 plot-api-jvm-3.0.2.jar
> >   +  882509 plot-base-portable-jvm-2.1.0.jar
> >   +  792534 plot-builder-portable-jvm-2.1.0.jar
> >   +  115007 plot-common-portable-jvm-2.1.0.jar
> >   +  454864 plot-config-portable-jvm-2.1.0.jar
> >   +  173932 vis-svg-portable-jvm-2.1.0.jar
> >   +   85686 xml-apis-ext-1.3.04.jar
> >
> > Vladimir
> >
>
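The repeatable-sequence question quoted above (saving a seed so that every test
execution fires events at exactly the same times) can be illustrated with a minimal
sketch. This is not JMeter's actual implementation; the class and method names below
are hypothetical, and the sketch only shows that a fixed seed makes a Poisson-style
arrival schedule fully reproducible:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch: a seeded generator of Poisson-style event times.
// Re-creating the generator with the same seed yields the exact same schedule,
// so persisting the seed is enough to replay a run's event timing.
public class SeededSchedule {
    public static List<Double> eventTimes(long seed, double eventsPerSecond, int count) {
        Random random = new Random(seed);
        List<Double> times = new ArrayList<>();
        double t = 0.0;
        for (int i = 0; i < count; i++) {
            // Exponential inter-arrival time for a Poisson process at the given rate
            t += -Math.log(1.0 - random.nextDouble()) / eventsPerSecond;
            times.add(t);
        }
        return times;
    }

    public static void main(String[] args) {
        List<Double> first = eventTimes(42L, 10.0, 5);
        List<Double> second = eventTimes(42L, 10.0, 5);
        // Same seed => identical, repeatable sequence
        System.out.println(first.equals(second)); // prints "true"
    }
}
```

Saving that one `long` with the test plan (or the results file) would be enough to
regenerate the identical schedule on the next run, which seems to be what the
question is asking about.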


-- 
Regards,
Philippe Mouawad.