Posted to dev@royale.apache.org by Harbs <ha...@gmail.com> on 2017/11/05 10:14:53 UTC

Test Beads (was Re: Unit Tests et. al.)

I wanted to branch this into a separate discussion because I want to discuss whether this is a good idea or a bad idea on its own.

Harbs
> On Nov 5, 2017, at 11:55 AM, Harbs <ha...@gmail.com> wrote:
> 
> I just had an interesting idea for solving the component testing problem in a Royale-specific way which might be a nice advantage over other frameworks:
> 
> Testing Beads.
> 
> The problems with component testing seem to be the following:
> 1. Testing at the correct point in the component lifecycle.
> 2. Being able to address specific components and their parts.
> 3. Being able to fail-early on tests that don’t require complete loading.
> 4. Ensuring that all tests complete — which usually means synchronous execution of tests.
> 
> Testing beads seem like they should be able to solve these problems in an interesting way.
> 
> Basically, a testing bead would be a bead which has an interface which:
> a. Reports test passes.
> b. Reports test failures.
> c. Reports ignored tests.
> d. Reports when all tests are done.
> 
> It would work something like this:
> 1. A test runner/test app, would create components and add testing beads to the components.
> 2. It would retain references to the testing beads and listen for results from the beads.
> 3. The test runner would run the app.
> 4. Each test bead would take care of running its own tests and report back when done.
> 5. Once all the test beads report success or a bead reports failure, the test runner would exit with the full report.
> 
> This would have the following advantages:
> 1. All tests could run in parallel. This would probably speed up test runs tremendously. Async operations would not block other tests from being run.
> 2. There’s no need for the test runner to worry about life-cycles. The bead would be responsible to test at the correct point in the lifecycle.
> 3. The first test to fail could exit. Failing early could make the test run much quicker when tests fail.
> 4. You could have an option to have the test runner either report all failing tests or fail early on the first one.
> 5. Running tests should be simple with a well-defined interface, and the actual tests could be as simple or as complicated as necessary.
> 
> This seems like a very good solution for framework development.
> 
> I’m not sure how this concept could be used for application development.  I guess an application developer could create a parallel testing app which is the same as the app plus testing beads, but that seems awkward.
> 
> Maybe it’s possible to use a testing CSS file which would add testing beads to components for testing builds. The problem with that is that code is still required to add those beads.
> 
> Maybe we can add special tags for adding the beads via MXML and/or ActionScript which could be stripped out for non-test builds.
> 
> Food for thought…
> Harbs


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Harbs <ha...@gmail.com>.
> I guess we are not understanding each other.

Probably not. The best way to discuss this is likely a POC.

> If the testing language is
> AS or JS, test authors have to know how to deal with the runtime
> differences. 

It depends on the tests. If the code is platform-agnostic, the tests should be platform-agnostic as well. If the code is platform-specific, the tests could have similar platform-specific blocks. It does not seem to me like it’s a difficult problem.

> If you want to build up a test harness of tests written in AS, I would
> recommend starting with FlexUnit…

Yup. This is the low-hanging fruit here.

> If you want to run tests that require the runtime, I think Mustella might
> be a good starting point instead of trying to re-invent it.

Maybe. Once I’m finished with the unit tests, I’ll try to figure out where I stand with integration tests (unless someone else gets to it first). It “feels” to me like the architecture I’m proposing is simpler and more powerful, but I could be wrong.

> On Nov 7, 2017, at 7:54 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
> 
> I guess we are not understanding each other.  If the testing language is
> AS or JS, test authors have to know how to deal with the runtime
> differences.  That's why Mustella uses MXML.  Automated test code
> generation could also abstract those differences from the test authors.
> 
> If you want to build up a test harness of tests written in AS, I would
> recommend starting with FlexUnit (as it appears you are doing) and limit
> tests to being small units that don't require the runtime.
> 
> If you want to run tests that require the runtime, I think Mustella might
> be a good starting point instead of trying to re-invent it.
> 
> Of course, I could be wrong...
> -Alex
> 
> On 11/7/17, 9:40 AM, "Harbs" <ha...@gmail.com> wrote:
> 
>> Right. I’m proposing a totally different architecture.
>> 
>> In the architecture I’m proposing, the runner is a passive observer. The
>> tests would be run by *the beads themselves* and *push* the results out
>> to the runner.
>> 
>> The runner would have a count of the number of tests that are supposed to
>> be run, and when all the tests return (or a fail-early test comes back)
>> the runner exits with the pass/fail result.
>> 
>> To be clear, there would be *two* separate architectures.
>> 
>> 1. Unit tests would be reserved for simple tests which could be run
>> without waiting for UI things to happen. That would use an active test
>> runner.
>> 2. Integration tests would allow for complex and async tests where the
>> test runner would be passive.
>> 
>> Hope this is clearer…
>> Harbs
>> 
>>> On Nov 7, 2017, at 7:33 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
>>> 
>>> If the runner calls testBead.test(), the next line of code cannot check
>>> for results.
>>> 
>>> for (i = 0; i < numTests; i++) {
>>>   testBead[i].test();
>>>   if (testBead[i].failed) {
>>>      // record failure
>>>   }
>>> }
>> 
> 


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Alex Harui <ah...@adobe.com.INVALID>.
I guess we are not understanding each other.  If the testing language is
AS or JS, test authors have to know how to deal with the runtime
differences.  That's why Mustella uses MXML.  Automated test code
generation could also abstract those differences from the test authors.

If you want to build up a test harness of tests written in AS, I would
recommend starting with FlexUnit (as it appears you are doing) and limit
tests to being small units that don't require the runtime.

If you want to run tests that require the runtime, I think Mustella might
be a good starting point instead of trying to re-invent it.

Of course, I could be wrong...
-Alex

On 11/7/17, 9:40 AM, "Harbs" <ha...@gmail.com> wrote:

>Right. I’m proposing a totally different architecture.
>
>In the architecture I’m proposing, the runner is a passive observer. The
>tests would be run by *the beads themselves* and *push* the results out
>to the runner.
>
>The runner would have a count of the number of tests that are supposed to
>be run, and when all the tests return (or a fail-early test comes back)
>the runner exits with the pass/fail result.
>
>To be clear, there would be *two* separate architectures.
>
>1. Unit tests would be reserved for simple tests which could be run
>without waiting for UI things to happen. That would use an active test
>runner.
>2. Integration tests would allow for complex and async tests where the
>test runner would be passive.
>
>Hope this is clearer…
>Harbs
>
>> On Nov 7, 2017, at 7:33 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
>> 
>> If the runner calls testBead.test(), the next line of code cannot check
>> for results.
>> 
>>  for (i = 0; i < numTests; i++) {
>>    testBead[i].test();
>>    if (testBead[i].failed) {
>>       // record failure
>>    }
>>  }
>


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Harbs <ha...@gmail.com>.
Right. I’m proposing a totally different architecture.

In the architecture I’m proposing, the runner is a passive observer. The tests would be run by *the beads themselves* and *push* the results out to the runner.

The runner would have a count of the number of tests that are supposed to be run, and when all the tests return (or a fail-early test comes back) the runner exits with the pass/fail result.
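
For example, the runner side could be little more than this (just a sketch: the class name and the "testsPassed"/"testsFailed" event names are made up here, and it assumes the org.apache.flex event classes):

  import org.apache.flex.events.Event;
  import org.apache.flex.events.IEventDispatcher;

  public class PassiveTestRunner
  {
      private var pending:int;
      private var anyFailed:Boolean;

      // beads: Array of test beads which dispatch "testsPassed" or
      // "testsFailed" when their component reaches the right point.
      public function watch(beads:Array):void
      {
          pending = beads.length;
          for each (var bead:IEventDispatcher in beads)
          {
              bead.addEventListener("testsPassed", resultHandler);
              bead.addEventListener("testsFailed", resultHandler);
          }
          // Nothing else to do; the runner never drives the tests,
          // it only counts results as the beads push them in.
      }

      private function resultHandler(event:Event):void
      {
          if (event.type == "testsFailed")
              anyFailed = true;
          pending--;
          if (anyFailed || pending == 0)
              report();
      }

      private function report():void
      {
          trace(anyFailed ? "FAILED" : "PASSED: all test beads reported");
      }
  }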

To be clear, there would be *two* separate architectures.

1. Unit tests would be reserved for simple tests which could be run without waiting for UI things to happen. That would use an active test runner.
2. Integration tests would allow for complex and async tests where the test runner would be passive.

Hope this is clearer…
Harbs

> On Nov 7, 2017, at 7:33 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
> 
> If the runner calls testBead.test(), the next line of code cannot check
> for results.
> 
>  for (i = 0; i < numTests; i++) {
>    testBead[i].test();
>    if (testBead[i].failed) {
>       // record failure
>    }
>  }


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Alex Harui <ah...@adobe.com.INVALID>.
Snip...

On 11/7/17, 12:33 AM, "Harbs" <ha...@gmail.com> wrote:

>>To me the "Later" problem is about how to make sequential lines of
>> ActionScript/JavaScript not run sequentially, so that the runtime can do
>> some processing.  I don't understand how a bead can do that
>> if the tests are written in a non-declarative language.
>
>Events. I’m making the assumption that all integration tests would be
>written inside a test bead. For example, layout testing could set some
>properties and then listen for layoutComplete to check that the layout
>was done correctly.

If the runner calls testBead.test(), the next line of code cannot check
for results.

  for (i = 0; i < numTests; i++) {
    testBead[i].test();
    if (testBead[i].failed) {
       // record failure
    }
  }

TestBead.test() cannot set up a listener for layoutComplete, because in
Flash and sometimes in the browser, the code may not fire layoutComplete
until the loop above finishes and gives the player or browser a chance to
render the screen (and thus determine the size and scroll parameters of a
TextField or Input element).

At least, that's my understanding.
-Alex



Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Harbs <ha...@gmail.com>.
Comments inline.

> Each Mixin brings in one class, but that class can drag in tons of stuff.
> The key question for our users is how they want to determine what gets
> tested.  From prior Adobe Flex users, they didn't want to add testing
> overhead to every MXML component, only certain ones, and often needed to
> assign it a different name than the id in order to have meaningful output.
> Especially because an id can be used in more than one place in MXML.
> Individual automation beads can be placed on each instance that you want
> tested, but that changes the source code.  Having an external map that the
> Mixin uses to walk the DOM and add beads doesn't affect the source code.

This is clearly beyond my level of expertise. It would probably be helpful to get input from users who have had clear testing strategies in the past.

This is a topic that I’ll probably be able to better grok once we have framework unit and integration testing more solid.

> To me the "Later" problem is about how to make sequential lines of
> ActionScript/JavaScript not run sequentially, so that the runtime can do
> some processing.  I don't understand how a bead can do that
> if the tests are written in a non-declarative language.

Events. I’m making the assumption that all integration tests would be written inside a test bead. For example, layout testing could set some properties and then listen for layoutComplete to check that the layout was done correctly.

I’d probably make TestBeadBase have some kind of test() method to properly route the test and results to the main test runner.
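
Something along these lines (just a sketch; the event names and the base class shape are assumptions, not a worked-out API):

  import org.apache.flex.core.IBead;
  import org.apache.flex.core.IStrand;
  import org.apache.flex.events.Event;
  import org.apache.flex.events.EventDispatcher;

  public class TestBeadBase extends EventDispatcher implements IBead
  {
      protected var _strand:IStrand;

      public function set strand(value:IStrand):void
      {
          _strand = value;
      }

      // Subclasses override test() and call reportResult() whenever they
      // are ready, either right away or later from an event handler such
      // as a layoutComplete listener on the strand.
      public function test():void
      {
      }

      protected function reportResult(passed:Boolean):void
      {
          // A custom event subclass could carry the test name and details.
          dispatchEvent(new Event(passed ? "testsPassed" : "testsFailed"));
      }
  }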

>> 
>> I’m not sure what you mean by this. What timeouts are you concerned by?
> 
> Flash for sure won't let you run code for more than 60 seconds without
> letting the player do its thing.  I thought there were timeouts for
> JavaScript in browsers, and potentially for operating systems thinking a
> process is "not responding".  The runtime probably needs to be given a
> chance to do something between tests.

If the test runner is running in node, node is async, so this is not a problem.

If it’s running in a browser it can also be async using events and callbacks.

I’m not sure where there would be a tight loop that would cause problems.

> One theory of testing says that you should test boundary conditions of
> every code path as well as some intermediate values.  Royale should have
> fewer "if" statements and other code path forks in the beads because we
> are trying to write PAYG code and every "if" theoretically introduces
> "just-in-case" code.
> 
> So, in theory, if you could describe the boundary conditions in metadata,
> you could write a test case generator.  I do not enjoy writing and
> debugging test cases so having something generate the tests would make
> life much easier for me.

The theory sounds good to me… ;-)

Harbs


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Alex Harui <ah...@adobe.com.INVALID>.

On 11/6/17, 2:06 PM, "Harbs" <ha...@gmail.com> wrote:

>Lots of points here.
>
>I’m not an expert either, but I’ll try to add my 2 cents…
>
>> My temptation would be to leverage the [Mixin] capability in the
>>compiler
>> instead of additional/different CSS.  Then it is just a command-line
>> option to inject a class that gets initialized early and can do other
>> things (including bringing in additional/different CSS).  However, I
>>have
>> been considering some sort of compiler option that injects beads on the
>> main application's strand.
>
>This sounds very interesting.
>
>That would sort of require a single bead attached to the application.
>It’s probably workable, but it makes fine-grained testing a bit harder.
>
>I wonder if we could utilize Mixin tags to add beads to classes and MXML
>files using the same compiler option. That would allow dividing the app
>into “units” of testing where the developer thinks it makes sense.

Each Mixin brings in one class, but that class can drag in tons of stuff.
The key question for our users is how they want to determine what gets
tested.  From prior Adobe Flex users, they didn't want to add testing
overhead to every MXML component, only certain ones, and often needed to
assign it a different name than the id in order to have meaningful output.
 Especially because an id can be used in more than one place in MXML.
Individual automation beads can be placed on each instance that you want
tested, but that changes the source code.  Having an external map that the
Mixin uses to walk the DOM and add beads doesn't affect the source code.
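
For illustration, the external map and the walk might look roughly like
this (the ids and bead classes are invented, and it assumes the
IParent/UIBase APIs for walking children and adding beads):

  import org.apache.flex.core.IBead;
  import org.apache.flex.core.IChild;
  import org.apache.flex.core.IParent;
  import org.apache.flex.core.UIBase;

  // id (or some other key) -> test/automation bead class to add.
  // ButtonTestBead and ListTestBead are made-up classes.
  private static var beadMap:Object = {
      "loginButton": ButtonTestBead,
      "resultsList": ListTestBead
  };

  private static function addTestBeads(parent:IParent):void
  {
      for (var i:int = 0; i < parent.numElements; i++)
      {
          var child:IChild = parent.getElementAt(i);
          var comp:UIBase = child as UIBase;
          if (comp && beadMap[comp.id] != null)
          {
              var BeadClass:Class = beadMap[comp.id];
              comp.addBead(new BeadClass() as IBead);
          }
          if (child is IParent)
              addTestBeads(child as IParent);   // recurse into containers
      }
  }
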
>
>
>> I believe the component/framework testing must figure out how to run the
>> next test step "later".  And that's hard in AS and JS.  Or else, we
>>needs
>> mocks or we restrict component tests to units that don't require any
>> runtime support.  I'm not sure you can solve the "later" problem with
>> beads, but it would be great if you can.
>
>I think the later problem can be solved very nicely by beads. The bead
>could run tests at whatever point it wants. It could add an event
>listener to the strand and/or other beads to run specific tests at
>specific points.

To me the "Later" problem is about how to make sequential lines of
ActionScript/JavaScript not run sequentially, so that the runtime can do
some processing.  I don't understand how a bead can do that
if the tests are written in a non-declarative language.
>
>It keeps track of all its tests and sends notification to the test runner
>when it’s done with the results and/or sends the results as the
>individual tests are run. The total number of tests could be set
>manually, or it could be calculated automatically by [Test] metadata tags.
>
>> It also has to figure out how to
>> handle the script timeout issue as well.  Once we decide on that, it
>>just
>> comes a matter of writing more tests.
>
>I’m not sure what you mean by this. What timeouts are you concerned by?

Flash for sure won't let you run code for more than 60 seconds without
letting the player do its thing.  I thought there were timeouts for
JavaScript in browsers, and potentially for operating systems thinking a
process is "not responding".  The runtime probably needs to be given a
chance to do something between tests.
>
>> Since we are brainstorming, I want to mention that I have dreams of
>> automatically generating tests from metadata.
>
>Sounds like an interesting idea, but to be honest you lost me from the
>start… ;-)
>
>I think these kinds of things are fundamentally incompatible with my
>brain, and I’ll probably have a hard time wrapping my head around this…
>;-)

One theory of testing says that you should test boundary conditions of
every code path as well as some intermediate values.  Royale should have
fewer "if" statements and other code path forks in the beads because we
are trying to write PAYG code and every "if" theoretically introduces
"just-in-case" code.

So, in theory, if you could describe the boundary conditions in metadata,
you could write a test case generator.  I do not enjoy writing and
debugging test cases so having something generate the tests would make
life much easier for me.

My 2 cents,
-Alex
>
>> On Nov 6, 2017, at 8:35 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
>> 
>> Disclaimer:  I am not an expert on automated testing, but I was involved
>> in many discussions around the time Flex was donated to Apache.  So I
>>have
>> some knowledge, but it might be stale.  Here are some thoughts on this
>> topic.
>> 
>> To respond to the subject:  as in the skinning/theming thread, I
>>wouldn't
>> worry about beads right now.  Beads are just encapsulations of code
>> snippets.  In complex situations like these, it is often better just to
>> "get the code to work", then get someone else to "get the code to work"
>>in
>> a different scenario and then see what needs to be parameterized and
>> re-used.
>> 
>> I'm unclear as to how much we need to do along the lines of automated
>> testing for Applications.  There are existing tools tuned for automating
>> Application testing. It would be great to hear from users as to whether
>> they have already chosen an automated testing tool for other
>>Applications.
>> Flex, for example, provided integration with the QTP testing system.
>> Maybe people want us to leverage QTP or RIATest, or something else.
>>Also,
>> Microsoft was trying to formalize automated testing for Windows apps.  I
>> don't know if our users are using that or not.
>> 
>> Microsoft was introducing the notion of "roles" as part of the WAI-ARIA
>> standard [1] and building a test harness around that.  We've spent a
>> little bit of time thinking about that in Royale.  The NumericStepper is
>> no longer a single component like it was in Flex, but rather, two
>> components (Input and up/down control) in order to conform to WAI-ARIA
>>not
>> just for testing but someday for accessibility.
>> 
>> Because of beads, there should be relatively few "private" parts to a
>> component, so I don't know how much code will be needed to access
>>things,
>> especially in JS where nothing is truly private anyway.
>> 
>> Because of PAYG, we do want to have some other code set the additional
>> information the automated testing tools need.  IIRC, not every tag in
>>MXML
>> needs to be tested, so adding a bead to specific MXML tags to mark them
>> for the testing tools makes sense, but then you can't make it completely
>> go away at runtime.
>> 
>> I often thought a key feature of PAYG and automated testing would be
>>that,
>> without touching the code, you could add some compiler option and inject
>> all of the extra data.  I think this is technically possible, and I
>>think
>> this is what you are discussing in this thread, but I'm not sure if
>>folks
>> want that or not.  If you don't want to touch the code, managing an
>> external map instead might be too painful.  Don't know, we should just
>>try
>> it.
>> 
>> My temptation would be to leverage the [Mixin] capability in the
>>compiler
>> instead of additional/different CSS.  Then it is just a command-line
>> option to inject a class that gets initialized early and can do other
>> things (including bringing in additional/different CSS).  However, I
>>have
>> been considering some sort of compiler option that injects beads on the
>> main application's strand.
>> 
>> But the above is all about automated Application testing.  IMO,
>> component/framework testing is different.
>> 
>> I believe the component/framework testing must figure out how to run the
>> next test step "later".  And that's hard in AS and JS.  Or else, we need
>> mocks or we restrict component tests to units that don't require any
>> runtime support.  I'm not sure you can solve the "later" problem with
>> beads, but it would be great if you can.  It also has to figure out how
>>to
>> handle the script timeout issue as well.  Once we decide on that, it
>>just
>> comes a matter of writing more tests.
>> 
>> Since we are brainstorming, I want to mention that I have dreams of
>> automatically generating tests from metadata.  Our framework code has
>>very
>> few functions/methods that are called by the Application developer.
>> Instead, most of the code we write are functions as setters and getters,
>> and event handlers.  Adding metadata to each of our functions seems way
>> more efficient than writing tests for each one, and might help solve the
>> "later" problem as the test harness could have control over when to make
>> the function call and when to test for the results.
>> 
>> So, some getter could have metadata that is something like:
>> 
>> [Test(type="getter", initialValue="0", minValue="int.MIN_VALUE",
>> maxValue="int.MAX_VALUE")]
>> function get value():int;
>> 
>> And that would generate several tests:
>> 
>>  var comp:Foo = new Foo();
>>  Assert(comp.value, is(0));
>> 
>>  comp.value = int.MIN_VALUE;
>>  Assert(comp.value, is(int.MIN_VALUE));
>> 
>>  comp.value = int.MAX_VALUE;
>>  Assert(comp.value, is(int.MAX_VALUE));
>> 
>> And even, if we add more metadata about out-of-range:
>> 
>> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
>> outOfRangeMin="exception")]
>> function get value():int;
>> 
>>  try {
>>    comp.value = -1; // (minValue - 1)
>>  } catch (e:Error) {
>>    Success();
>>  }
>>  Failure();
>> 
>> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
>> outOfRangeMin="0")]
>> function get value():int;
>> 
>>  comp.value = -1; // (minValue - 1)
>>  Assert(comp.value, is(0));
>> 
>> 
>> 
>> An Event handler might look like:
>> 
>> [Test(eventType="org.apache.flex.events.MouseEvent", type="click",
>> data="localx:0;localy:0", resultEvent="stateChange")]
>> function clickHandler(e:MouseEvent):void
>> {
>> }
>> 
>> 
>> 
>> 
>> 
>> 
>> And result in:
>>  var comp:Foo = new Foo();
>>  var e:Event = new org.apache.flex.events.MouseEvent('click');
>>  e["localx"] = 0;
>>  e["localy"] = 0;
>>  comp.addEventListener("stateChange", genericEventListener);
>>  comp.clickHandler(e);
>>  AssertEvent(was(0))
>> 
>> 
>> If we want to do integration testing that requires the runtime, we could
>> add a "wait" tag to the metadata and the test engine would do what it
>> needs to in order for the runtime to do some processing.
>> 
>> My 2 cents,
>> -Alex
>> 
>> [1] 
>>https://www.w3.org/WAI/intro/aria
>> 
>> On 11/5/17, 1:14 AM, "Harbs" <ha...@gmail.com> wrote:
>> 
>>> I wanted to branch this into a separate discussion because I want to
>>> discuss whether this is a good idea or a bad idea on its own.
>>> 
>>> Harbs
>>>> On Nov 5, 2017, at 11:55 AM, Harbs <ha...@gmail.com> wrote:
>>>> 
>>>> I just had an interesting idea for solving the component testing
>>>> problem in a Royale-specific way which might be a nice advantage over
>>>> other frameworks:
>>>> 
>>>> Testing Beads.
>>>> 
>>>> The problems with component testing seem to be the following:
>>>> 1. Testing at the correct point in the component lifecycle.
>>>> 2. Being able to address specific components and their parts.
>>>> 3. Being able to fail-early on tests that don’t require complete
>>>> loading.
>>>> 4. Ensuring that all tests complete — which usually means synchronous
>>>> execution of tests.
>>>> 
>>>> Testing beads seem like they should be able to solve these problems in
>>>> an interesting way.
>>>> 
>>>> Basically, a testing bead would be a bead which has an interface
>>>>which:
>>>> a. Reports test passes.
>>>> b. reports test failures.
>>>> c. reports ignored test.
>>>> d. Reports when all tests are done.
>>>> 
>>>> It would work something like this:
>>>> 1. A test runner/test app, would create components and add testing
>>>> beads to the components.
>>>> 2. It would retain references to the testing beads and listen for
>>>> results from the beads.
>>>> 3. The test runner would run the app.
>>>> 4. Each test bead would take care of running its own tests and report
>>>> back when done.
>>>> 5. Once all the test beads report success or a bead reports failure,
>>>> the test runner would exit with the full report.
>>>> 
>>>> This would have the following advantages:
>>>> 1. All tests could run in parallel. This would probably speed up test
>>>> runs tremendously. Async operations would not block other tests from
>>>> being run.
>>>> 2. There’s no need for the test runner to worry about life-cycles. The
>>>> bead would be responsible to test at the correct point in the
>>>>lifecycle.
>>>> 3. The first test to fail could exit. Failing early could make the
>>>>test
>>>> run much quicker when tests fail.
>>>> 4. You could have an option to have the test runner either report all
>>>> failing tests or fail early on the first one.
>>>> 5. Running tests should be simple with a well-defined interface, and
>>>> the actual tests could be as simple or as complicated as necessary.
>>>> 
>>>> This seems like a very good solution for framework development.
>>>> 
>>>> I’m not sure how this concept could be used for application
>>>> development.  I guess an application developer could create a parallel
>>>> testing app which is the same as the app plus testing beads, but that
>>>> seems awkward.
>>>> 
>>>> Maybe it’s possible to use a testing CSS file which would add testing
>>>> beads to components for testing builds, the problem with that is that
>>>> there’s a requirement for code to add those beads.
>>>> 
>>>> Maybe we can add special tags for adding the beads via MXML and/or
>>>> ActionScript which could be stripped out for non-test builds.
>>>> 
>>>> Food for thought…
>>>> Harbs
>>> 
>> 
>


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Harbs <ha...@gmail.com>.
Lots of points here.

I’m not an expert either, but I’ll try to add my 2 cents…

> My temptation would be to leverage the [Mixin] capability in the compiler
> instead of additional/different CSS.  Then it is just a command-line
> option to inject a class that gets initialized early and can do other
> things (including bringing in additional/different CSS).  However, I have
> been considering some sort of compiler option that injects beads on the
> main application's strand.

This sounds very interesting.

That would sort of require a single bead attached to the application. It’s probably workable, but it makes fine-grained testing a bit harder.

I wonder if we could utilize Mixin tags to add beads to classes and MXML files using the same compiler option. That would allow dividing the app into “units” of testing where the developer thinks it makes sense.


> I believe the component/framework testing must figure out how to run the
> next test step "later".  And that's hard in AS and JS.  Or else, we need
> mocks or we restrict component tests to units that don't require any
> runtime support.  I'm not sure you can solve the "later" problem with
> beads, but it would be great if you can.

I think the later problem can be solved very nicely by beads. The bead could run tests at whatever point it wants. It could add an event listener to the strand and/or other beads to run specific tests at specific points.

It keeps track of all its tests and sends notification to the test runner when it’s done with the results and/or sends the results as the individual tests are run. The total number of tests could be set manually, or it could be calculated automatically by [Test] metadata tags.
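
For example (sketch only; the class, the width/height check and the result event names are placeholders), a layout test bead might look something like:

  import org.apache.flex.core.IBead;
  import org.apache.flex.core.IStrand;
  import org.apache.flex.core.UIBase;
  import org.apache.flex.events.Event;
  import org.apache.flex.events.EventDispatcher;
  import org.apache.flex.events.IEventDispatcher;

  public class LayoutTestBead extends EventDispatcher implements IBead
  {
      // Could instead be derived from [Test] metadata on test methods.
      public var totalTests:int = 1;

      private var _strand:IStrand;

      public function set strand(value:IStrand):void
      {
          _strand = value;
          // Don't test now; wait for the strand to say layout is done.
          IEventDispatcher(_strand).addEventListener("layoutComplete",
                                                     layoutDoneHandler);
      }

      private function layoutDoneHandler(event:Event):void
      {
          var host:UIBase = _strand as UIBase;
          var passed:Boolean = host.width > 0 && host.height > 0;
          dispatchEvent(new Event(passed ? "testsPassed" : "testsFailed"));
      }
  }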

> It also has to figure out how to
> handle the script timeout issue as well.  Once we decide on that, it just
> comes a matter of writing more tests.

I’m not sure what you mean by this. What timeouts are you concerned by?

> Since we are brainstorming, I want to mention that I have dreams of
> automatically generating tests from metadata.

Sounds like an interesting idea, but to be honest you lost me from the start… ;-)

I think these kinds of things are fundamentally incompatible with my brain, and I’ll probably have a hard time wrapping my head around this… ;-)

> On Nov 6, 2017, at 8:35 PM, Alex Harui <ah...@adobe.com.INVALID> wrote:
> 
> Disclaimer:  I am not an expert on automated testing, but I was involved
> in many discussions around the time Flex was donated to Apache.  So I have
> some knowledge, but it might be stale.  Here are some thoughts on this
> topic.
> 
> To respond to the subject:  as in the skinning/theming thread, I wouldn't
> worry about beads right now.  Beads are just encapsulations of code
> snippets.  In complex situations like these, it is often better just to
> "get the code to work", then get someone else to "get the code to work" in
> a different scenario and then see what needs to be parameterized and
> re-used.
> 
> I'm unclear as to how much we need to do along the lines of automated
> testing for Applications.  There are existing tools tuned for automating
> Application testing. It would be great to hear from users as to whether
> they have already chosen an automated testing tool for other Applications.
> Flex, for example, provided integration with the QTP testing system.
> Maybe people want us to leverage QTP or RIATest, or something else.  Also,
> Microsoft was trying to formalize automated testing for Windows apps.  I
> don't know if our users are using that or not.
> 
> Microsoft was introducing the notion of "roles" as part of the WAI-ARIA
> standard [1] and building a test harness around that.  We've spent a
> little bit of time thinking about that in Royale.  The NumericStepper is
> no longer a single component like it was in Flex, but rather, two
> components (Input and up/down control) in order to conform to WAI-ARIA not
> just for testing but someday for accessibility.
> 
> Because of beads, there should be relatively few "private" parts to a
> component, so I don't know how much code will be needed to access things,
> especially in JS where nothing is truly private anyway.
> 
> Because of PAYG, we do want to have some other code set the additional
> information the automated testing tools need.  IIRC, not every tag in MXML
> needs to be tested, so adding a bead to specific MXML tags to mark them
> for the testing tools makes sense, but then you can't make it completely
> go away at runtime.
> 
> I often thought a key feature of PAYG and automated testing would be that,
> without touching the code, you could add some compiler option and inject
> all of the extra data.  I think this is technically possible, and I think
> this is what you are discussing in this thread, but I'm not sure if folks
> want that or not.  If you don't want to touch the code, managing an
> external map instead might be too painful.  Don't know, we should just try
> it.
> 
> My temptation would be to leverage the [Mixin] capability in the compiler
> instead of additional/different CSS.  Then it is just a command-line
> option to inject a class that gets initialized early and can do other
> things (including bringing in additional/different CSS).  However, I have
> been considering some sort of compiler option that injects beads on the
> main application's strand.
> 
> But the above is all about automated Application testing.  IMO,
> component/framework testing is different.
> 
> I believe the component/framework testing must figure out how to run the
> next test step "later".  And that's hard in AS and JS.  Or else, we need
> mocks or we restrict component tests to units that don't require any
> runtime support.  I'm not sure you can solve the "later" problem with
> beads, but it would be great if you can.  It also has to figure out how to
> handle the script timeout issue as well.  Once we decide on that, it just
> comes a matter of writing more tests.
> 
> Since we are brainstorming, I want to mention that I have dreams of
> automatically generating tests from metadata.  Our framework code has very
> few functions/methods that are called by the Application developer.
> Instead, most of the code we write are functions as setters and getters,
> and event handlers.  Adding metadata to each of our functions seems way
> more efficient than writing tests for each one, and might help solve the
> "later" problem as the test harness could have control over when to make
> the function call and when to test for the results.
> 
> So, some getter could have metadata that is something like:
> 
> [Test(type="getter", initialValue="0", minValue="int.MIN_VALUE",
> maxValue="int.MAX_VALUE")]
> function get value():int;
> 
> And that would generate several tests:
> 
>  var comp:Foo = new Foo();
>  Assert(comp.value, is(0));
> 
>  comp.value = int.MIN_VALUE;
>  Assert(comp.value, is(int.MIN_VALUE));
> 
>  comp.value = int.MAX_VALUE;
>  Assert(comp.value, is(int.MAX_VALUE));
> 
> And even, if we add more metadata about out-of-range:
> 
> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
> outOfRangeMin="exception")]
> function get value():int;
> 
>  try {
>    comp.value = -1; // (minValue - 1)
>  } catch (e:Error) {
>    Success();
>  }
>  Failure();
> 
> [Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
> outOfRangeMin="0")]
> function get value():int;
> 
>  comp.value = -1; // (minValue - 1)
>  Assert(comp.value, is(0));
> 
> 
> 
> An Event handler might look like:
> 
> [Test(eventType="org.apache.flex.events.MouseEvent", type="click",
> data="localx:0;localy:0", resultEvent="stateChange")]
> function clickHandler(e:MouseEvent):void
> {
> }
> 
> 
> 
> 
> 
> 
> And result in:
>  var comp:Foo = new Foo();
>  var e:Event = new org.apache.flex.events.MouseEvent('click');
>  e["localx"] = 0;
>  e["localy"] = 0;
>  comp.addEventListener("stateChange", genericEventListener);
>  comp.clickHandler(e);
>  AssertEvent(was(0))
> 
> 
> If we want to do integration testing that requires the runtime, we could
> add a "wait" tag to the metadata and the test engine would do what it
> needs to in order for the runtime to do some processing.
> 
> My 2 cents,
> -Alex
> 
> [1] https://www.w3.org/WAI/intro/aria
> 
> On 11/5/17, 1:14 AM, "Harbs" <ha...@gmail.com> wrote:
> 
>> I wanted to branch this into a separate discussion because I want to
>> discuss whether this is a good idea or a bad idea on its own.
>> 
>> Harbs
>>> On Nov 5, 2017, at 11:55 AM, Harbs <ha...@gmail.com> wrote:
>>> 
>>> I just had an interesting idea for solving the component testing
>>> problem in a Royale-specific way which might be a nice advantage over
>>> other frameworks:
>>> 
>>> Testing Beads.
>>> 
>>> The problems with component testing seem to be the following:
>>> 1. Testing at the correct point in the component lifecycle.
>>> 2. Being able to address specific components and their parts.
>>> 3. Being able to fail-early on tests that don’t require complete
>>> loading.
>>> 4. Ensuring that all tests complete — which usually means synchronous
>>> execution of tests.
>>> 
>>> Testing beads seem like they should be able to solve these problems in
>>> an interesting way.
>>> 
>>> Basically, a testing bead would be a bead which has an interface which:
>>> a. Reports test passes.
>>> b. reports test failures.
>>> c. reports ignored test.
>>> d. Reports when all tests are done.
>>> 
>>> It would work something like this:
>>> 1. A test runner/test app, would create components and add testing
>>> beads to the components.
>>> 2. It would retain references to the testing beads and listen for
>>> results from the beads.
>>> 3. The test runner would run the app.
>>> 4. Each test bead would take care of running its own tests and report
>>> back when done.
>>> 5. Once all the test beads report success or a bead reports failure,
>>> the test runner would exit with the full report.
>>> 
>>> This would have the following advantages:
>>> 1. All tests could run in parallel. This would probably speed up test
>>> runs tremendously. Async operations would not block other tests from
>>> being run.
>>> 2. There’s no need for the test runner to worry about life-cycles. The
>>> bead would be responsible to test at the correct point in the lifecycle.
>>> 3. The first test to fail could exit. Failing early could make the test
>>> run much quicker when tests fail.
>>> 4. You could have an option to have the test runner either report all
>>> failing tests or fail early on the first one.
>>> 5. Running tests should be simple with a well-defined interface, and
>>> the actual tests could be as simple or as complicated as necessary.
>>> 
>>> This seems like a very good solution for framework development.
>>> 
>>> I’m not sure how this concept could be used for application
>>> development.  I guess an application developer could create a parallel
>>> testing app which is the same as the app plus testing beads, but that
>>> seems awkward.
>>> 
>>> Maybe it’s possible to use a testing CSS file which would add testing
>>> beads to components for testing builds, the problem with that is that
>>> there’s a requirement for code to add those beads.
>>> 
>>> Maybe we can add special tags for adding the beads via MXML and/or
>>> ActionScript which could be stripped out for non-test builds.
>>> 
>>> Food for thought…
>>> Harbs
>> 
> 


Re: Test Beads (was Re: Unit Tests et. al.)

Posted by Alex Harui <ah...@adobe.com.INVALID>.
Disclaimer:  I am not an expert on automated testing, but I was involved
in many discussions around the time Flex was donated to Apache.  So I have
some knowledge, but it might be stale.  Here are some thoughts on this
topic.

To respond to the subject:  as in the skinning/theming thread, I wouldn't
worry about beads right now.  Beads are just encapsulations of code
snippets.  In complex situations like these, it is often better just to
"get the code to work", then get someone else to "get the code to work" in
a different scenario and then see what needs to be parameterized and
re-used.

I'm unclear as to how much we need to do along the lines of automated
testing for Applications.  There are existing tools tuned for automating
Application testing. It would be great to hear from users as to whether
they have already chosen an automated testing tool for other Applications.
 Flex, for example, provided integration with the QTP testing system.
Maybe people want us to leverage QTP or RIATest, or something else.  Also,
Microsoft was trying to formalize automated testing for Windows apps.  I
don't know if our users are using that or not.

Microsoft was introducing the notion of "roles" as part of the WAI-ARIA
standard [1] and building a test harness around that.  We've spent a
little bit of time thinking about that in Royale.  The NumericStepper is
no longer a single component like it was in Flex, but rather, two
components (Input and up/down control) in order to conform to WAI-ARIA not
just for testing but someday for accessibility.

Because of beads, there should be relatively few "private" parts to a
component, so I don't know how much code will be needed to access things,
especially in JS where nothing is truly private anyway.

Because of PAYG, we do want to have some other code set the additional
information the automated testing tools need.  IIRC, not every tag in MXML
needs to be tested, so adding a bead to specific MXML tags to mark them
for the testing tools makes sense, but then you can't make it completely
go away at runtime.

I often thought a key feature of PAYG and automated testing would be that,
without touching the code, you could add some compiler option and inject
all of the extra data.  I think this is technically possible, and I think
this is what you are discussing in this thread, but I'm not sure if folks
want that or not.  If you don't want to touch the code, managing an
external map instead might be too painful.  Don't know, we should just try
it.

My temptation would be to leverage the [Mixin] capability in the compiler
instead of additional/different CSS.  Then it is just a command-line
option to inject a class that gets initialized early and can do other
things (including bringing in additional/different CSS).  However, I have
been considering some sort of compiler option that injects beads on the
main application's strand.
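
Roughly, the shape would be something like this (sketch only; the exact
init() signature the compiler calls isn't checked here, and
InjectTestBeads/TestRunnerBead are made-up names):

  import org.apache.flex.core.Application;

  [Mixin]
  public class InjectTestBeads
  {
      // Linked in only for test builds (e.g. via a compiler include
      // option); the mixin machinery calls init() early at startup.
      public static function init(root:Object):void
      {
          var app:Application = root as Application;
          if (app)
              app.addBead(new TestRunnerBead());   // made-up runner bead
      }
  }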

But the above is all about automated Application testing.  IMO,
component/framework testing is different.

I believe the component/framework testing must figure out how to run the
next test step "later".  And that's hard in AS and JS.  Or else, we need
mocks or we restrict component tests to units that don't require any
runtime support.  I'm not sure you can solve the "later" problem with
beads, but it would be great if you can.  It also has to figure out how to
handle the script timeout issue as well.  Once we decide on that, it just
comes a matter of writing more tests.

Since we are brainstorming, I want to mention that I have dreams of
automatically generating tests from metadata.  Our framework code has very
few functions/methods that are called by the Application developer.
Instead, most of the code we write are functions as setters and getters,
and event handlers.  Adding metadata to each of our functions seems way
more efficient than writing tests for each one, and might help solve the
"later" problem as the test harness could have control over when to make
the function call and when to test for the results.

So, some getter could have metadata that is something like:

[Test(type="getter", initialValue="0", minValue="int.MIN_VALUE",
maxValue="int.MAX_VALUE")]
function get value():int;

And that would generate several tests:

  var comp:Foo = new Foo();
  Assert(comp.value, is(0));

  comp.value = int.MIN_VALUE;
  Assert(comp.value, is(int.MIN_VALUE));

  comp.value = int.MAX_VALUE;
  Assert(comp.value, is(int.MAX_VALUE));

And even, if we add more metadata about out-of-range:

[Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
outOfRangeMin="exception")]
function get value():int;

  try {
    comp.value = -1; // (minValue - 1)
  } catch (e:Error) {
    Success();
  }
  Failure();

[Test(initialValue="0", minValue="0", maxValue="int.MAX_VALUE",
outOfRangeMin="0")]
function get value():int;

  comp.value = -1; // (minValue - 1)
  Assert(comp.value, is(0));



An Event handler might look like:

[Test(eventType="org.apache.flex.events.MouseEvent", type="click",
data="localx:0;localy:0", resultEvent="stateChange")]
function clickHandler(e:MouseEvent):void
{
}






And result in:
  var comp:Foo = new Foo();
  var e:Event = new org.apache.flex.events.MouseEvent('click');
  e["localx"] = 0;
  e["localy"] = 0;
  comp.addEventListener("stateChange", genericEventListener);
  comp.clickHandler(e);
  AssertEvent(was(0))


If we want to do integration testing that requires the runtime, we could
add a "wait" tag to the metadata and the test engine would do what it
needs to in order for the runtime to do some processing.

My 2 cents,
-Alex

[1] https://www.w3.org/WAI/intro/aria

On 11/5/17, 1:14 AM, "Harbs" <ha...@gmail.com> wrote:

>I wanted to branch this into a separate discussion because I want to
>discuss whether this is a good idea or a bad idea on its own.
>
>Harbs
>> On Nov 5, 2017, at 11:55 AM, Harbs <ha...@gmail.com> wrote:
>> 
>> I just had an interesting idea for solving the component testing
>>problem in a Royale-specific way which might be a nice advantage over
>>other frameworks:
>> 
>> Testing Beads.
>> 
>> The problems with component testing seem to be the following:
>> 1. Testing at the correct point in the component lifecycle.
>> 2. Being able to address specific components and their parts.
>> 3. Being able to fail-early on tests that don’t require complete
>>loading.
>> 4. Ensuring that all tests complete — which usually means synchronous
>>execution of tests.
>> 
>> Testing beads seem like they should be able to solve these problems in
>>an interesting way.
>> 
>> Basically, a testing bead would be a bead which has an interface which:
>> a. Reports test passes.
>> b. reports test failures.
>> c. reports ignored test.
>> d. Reports when all tests are done.
>> 
>> It would work something like this:
>> 1. A test runner/test app, would create components and add testing
>>beads to the components.
>> 2. It would retain references to the testing beads and listen for
>>results from the beads.
>> 3. The test runner would run the app.
>> 4. Each test bead would take care of running its own tests and report
>>back when done.
>> 5. Once all the test beads report success or a bead reports failure,
>>the test runner would exit with the full report.
>> 
>> This would have the following advantages:
>> 1. All tests could run in parallel. This would probably speed up test
>>runs tremendously. Async operations would not block other tests from
>>being run.
>> 2. There’s no need for the test runner to worry about life-cycles. The
>>bead would be responsible to test at the correct point in the lifecycle.
>> 3. The first test to fail could exit. Failing early could make the test
>>run much quicker when tests fail.
>> 4. You could have an option to have the test runner either report all
>>failing tests or fail early on the first one.
>> 5. Running tests should be simple with a well-defined interface, and
>>the actual tests could be as simple or as complicated as necessary.
>> 
>> This seems like a very good solution for framework development.
>> 
>> I’m not sure how this concept could be used for application
>>development.  I guess an application developer could create a parallel
>>testing app which is the same as the app plus testing beads, but that
>>seems awkward.
>> 
>> Maybe it’s possible to use a testing CSS file which would add testing
>>beads to components for testing builds, the problem with that is that
>>there’s a requirement for code to add those beads.
>> 
>> Maybe we can add special tags for adding the beads via MXML and/or
>>ActionScript which could be stripped out for non-test builds.
>> 
>> Food for thought…
>> Harbs
>