Posted to dev@cordova.apache.org by Lisa Seacat DeLuca <ld...@us.ibm.com> on 2013/06/20 14:44:07 UTC

Opinions Needed: Platform specific features and mobilespec tests

One issue I ran into with respect to the mobile spec is that some tests 
are only applicable to certain device types.  We have a couple of options 
when it comes to those types of tests:

1. Not include them in the automated tests
2. Include them knowing that they *might* cause failures with certain 
device types (see example)
3. Add JavaScript logic to check for the device type before performing 
the tests
4. OR we could create platform-specific automated tests that should be run 
in addition to the base automated test per device, e.g. automatedAndroid, 
automatedIOS, etc.

An example is:
https://issues.apache.org/jira/browse/CB-3484
camera.cleanup is only supported on iOS.

I added a test case to verify that the function exists.  But it doesn't 
actually run camera.cleanup, so there are no failures on other platforms. 
So there really shouldn't be any harm in keeping the test.
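
For reference, the existence check looks something like this (the spec 
description here is just illustrative):

    // Only asserts that the API surface exists; it never calls
    // cleanup(), so it can't fail at runtime on other platforms.
    it("camera.cleanup should exist as a function", function() {
        expect(navigator.camera.cleanup).toBeDefined();
        expect(typeof navigator.camera.cleanup).toBe("function");
    });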


What are everyone's opinions on a good approach to handle this type of 
situation?

Lisa Seacat DeLuca

RE: Opinions Needed: Platform specific features and mobilespec tests

Posted by "Li, Jonathan" <jo...@sap.com>.
We could probably report the test failure for an unimplemented feature as 
an "Unimplemented Failure", and show it in Jasmine using a different 
color. Those failures are like warnings: they mean the feature is expected 
to be supported by the platform, but just isn't implemented yet.

"Expected failure" is a little confusing, since when an "expected failure" 
happens, it actually means the test case succeeded.

Jonathan


Re: Opinions Needed: Platform specific features and mobilespec tests

Posted by Ian Clelland <ic...@google.com>.
For tests like this, I'd like to see something in Jasmine that is akin to
the "Expected Failure" result in JUnit / Python unittest.

It means that we still run all of the tests, but a failure on a device that
doesn't support the feature doesn't cause the whole test suite to turn red.

On the other hand, if a test which is expected to fail actually succeeds,
that is reported as "unexpected success" in the test output. We can then go
and look at what has changed -- either the test is broken, or the issue was
actually resolved.

I don't think it's available as an idiom in Jasmine, but it's just
JavaScript; it shouldn't be too hard to implement.
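
Something like this, as a first sketch (xfailIt is a made-up name, and it 
assumes the spec body signals failure by throwing, e.g. via a throwing 
assert helper, rather than via Jasmine's non-throwing expect()):

    // Invert the spec's outcome: an expected failure passes quietly,
    // and a clean run is surfaced as an "unexpected success".
    function xfailIt(description, body) {
        it(description + " [expected failure]", function() {
            var failed = false;
            try {
                body();
            } catch (e) {
                failed = true; // the failure we expected
            }
            // If the body ran clean, fail loudly so someone checks
            // whether the test is broken or the issue was fixed.
            expect(failed).toBe(true);
        });
    }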

Ian


Re: Opinions Needed: Platform specific features and mobilespec tests

Posted by Jesse MacFadyen <pu...@gmail.com>.
My 2 cents:
- APIs that should exist on all devices go in mobile spec
- Platform-specific ones go in each platform's repo

I assume this is a vanishing issue, as all mobile spec tests will be
broken out into each plugin's repo and would still be easy to run from
mobile spec.

Cheers,
  Jesse

Sent from my iPhone5


Re: Opinions Needed: Platform specific features and mobilespec tests

Posted by Jeffrey Heifetz <jh...@blackberry.com>.
+1

On 13-06-20 9:06 AM, "Andrew Grieve" <ag...@chromium.org> wrote:


Re: Opinions Needed: Platform specific features and mobilespec tests

Posted by Andrew Grieve <ag...@chromium.org>.
Definitely torn on this one. On one hand, if there are features implemented
on some platforms that should be implemented on others, then having them
fail is a constant reminder that your platform needs to implement the
missing functionality. OTOH, things like camera cleanup are meant to be
platform-specific, so it's nothing but an annoyance if that fails on other
platforms.

So, I think my take on it is:

1. Have them shared and failing if the API should eventually be implemented
on all platforms
2. Wrap tests in if (platform.name == 'ios') {} if they are meant to only
work on one platform.
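
For point 2, something like this, give or take the exact platform helper 
(whether it's a platform.name global or the device plugin's 
device.platform depends on the mobile-spec harness, so treat the name as 
an assumption; runs/waitsFor are the Jasmine 1.x async idiom):

    // Only register the iOS-only suite when actually running on iOS,
    // so other platforms never see (or fail) these specs.
    if (platform.name == 'ios') {
        describe('camera.cleanup (iOS only)', function() {
            it('should clean up temporary camera files', function() {
                var succeeded = false;
                runs(function() {
                    navigator.camera.cleanup(
                        function() { succeeded = true; },
                        function(err) { /* stays false; spec times out */ });
                });
                // Poll until the success callback fires, or fail the
                // spec after 5 seconds.
                waitsFor(function() { return succeeded; },
                         'camera.cleanup success callback', 5000);
            });
        });
    }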