Posted to dev@cordova.apache.org by Andrew Grieve <ag...@chromium.org> on 2012/09/11 04:44:03 UTC

Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Hey Joe,

Wondering why you made this into a Jasmine test? Does it make the results
more easily captured?

The other thing I'm wondering is whether this should use some JS reflection to
detect the available bridge modes, since they are different on iOS and
non-existent on others (mobile-spec tests are supposed to work on all
platforms, correct?)
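Such reflection could be sketched roughly like this. Purely illustrative: the `jsToNativeModes` property and the `availableBridgeModes` helper are invented names, not an API that cordova.js actually exposes.

```javascript
// Hypothetical sketch only: `jsToNativeModes` is an invented property
// name, not a real cordova.js API. The idea is that a platform's exec
// module could expose a table of numeric mode constants, and the test
// page could enumerate whatever is there (empty on platforms with no
// configurable bridge).
function availableBridgeModes(exec) {
  var modes = [];
  var table = exec && exec.jsToNativeModes;
  if (table) {
    for (var name in table) {
      if (typeof table[name] === 'number') {
        modes.push({ name: name, id: table[name] });
      }
    }
  }
  return modes;
}

// Stubbed Android-style exec module for illustration:
var stubExec = { jsToNativeModes: { PROMPT: 0, JS_OBJECT: 1 } };
console.log(availableBridgeModes(stubExec).length); // 2
console.log(availableBridgeModes({}).length);       // 0 (nothing exposed)
```

A runner could then loop over the returned list and run the same benchmark once per mode, skipping entirely on platforms that expose nothing.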




On Mon, Sep 10, 2012 at 5:18 PM, <bo...@apache.org> wrote:

> Adding different bridge benchmarking to the Automated Mobile Spec Tests
>
>
> Project:
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/repo
> Commit:
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/commit/019c43ef
> Tree:
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/tree/019c43ef
> Diff:
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/diff/019c43ef
>
> Branch: refs/heads/master
> Commit: 019c43ef8fe940289145c557bd669df60eb3b759
> Parents: 8cb36ea
> Author: Joe Bowser <bo...@apache.org>
> Authored: Mon Sep 10 14:17:31 2012 -0700
> Committer: Joe Bowser <bo...@apache.org>
> Committed: Mon Sep 10 14:17:31 2012 -0700
>
> ----------------------------------------------------------------------
>  autotest/index.html            |    1 +
>  autotest/pages/bridge.html     |   49 ++++++++++
>  autotest/tests/bridge.tests.js |  176 +++++++++++++++++++++++++++++++++++
>  3 files changed, 226 insertions(+), 0 deletions(-)
> ----------------------------------------------------------------------
>
>
>
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/blob/019c43ef/autotest/index.html
> ----------------------------------------------------------------------
> diff --git a/autotest/index.html b/autotest/index.html
> index 70a1eeb..48fe722 100755
> --- a/autotest/index.html
> +++ b/autotest/index.html
> @@ -28,6 +28,7 @@
>      <a href="pages/notification.html" class="btn large" style="width:100%;">Run Notification Tests</a>
>      <a href="pages/platform.html" class="btn large" style="width:100%;">Run Platform Tests</a>
>      <a href="pages/storage.html" class="btn large" style="width:100%;">Run Storage Tests</a>
> +    <a href="pages/bridge.html" class="btn large" style="width:100%;">Run Bridge Tests</a>
>
>      <h2> </h2><div class="backBtn" onclick="backHome();">Back</div>
>    </body>
>
>
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/blob/019c43ef/autotest/pages/bridge.html
> ----------------------------------------------------------------------
> diff --git a/autotest/pages/bridge.html b/autotest/pages/bridge.html
> new file mode 100644
> index 0000000..56d2d59
> --- /dev/null
> +++ b/autotest/pages/bridge.html
> @@ -0,0 +1,49 @@
> +<!DOCTYPE html>
> +<html>
> +
> +<head>
> +  <title>Cordova: Device API Specs</title>
> +
> +  <meta name="viewport" content="width=device-width, height=device-height, user-scalable=yes, initial-scale=1.0;" />
> +  <!-- Load jasmine -->
> +  <link href="../jasmine.css" rel="stylesheet"/>
> +  <script type="text/javascript" src="../jasmine.js"></script>
> +  <script type="text/javascript" src="../html/HtmlReporterHelpers.js"></script>
> +  <script type="text/javascript" src="../html/HtmlReporter.js"></script>
> +  <script type="text/javascript" src="../html/ReporterView.js"></script>
> +  <script type="text/javascript" src="../html/SpecView.js"></script>
> +  <script type="text/javascript" src="../html/SuiteView.js"></script>
> +  <script type="text/javascript" src="../html/TrivialReporter.js"></script>
> +
> +  <!-- Source -->
> +  <script type="text/javascript" src="../../cordova.js"></script>
> +
> +  <!-- Load Test Runner -->
> +  <script type="text/javascript" src="../test-runner.js"></script>
> +
> +  <!-- Tests -->
> +  <script type="text/javascript" src="../tests/bridge.tests.js"></script>
> +
> +  <script type="text/javascript">
> +    document.addEventListener('deviceready', function () {
> +      var jasmineEnv = jasmine.getEnv();
> +      jasmineEnv.updateInterval = 1000;
> +
> +      var htmlReporter = new jasmine.HtmlReporter();
> +
> +      jasmineEnv.addReporter(htmlReporter);
> +
> +      jasmineEnv.specFilter = function(spec) {
> +        return htmlReporter.specFilter(spec);
> +      };
> +
> +      jasmineEnv.execute();
> +    }, false);
> +  </script>
> +</head>
> +
> +<body>
> +  <a href="javascript:" class="backBtn" onclick="backHome();">Back</a>
> +</body>
> +</html>
> +
>
>
> http://git-wip-us.apache.org/repos/asf/incubator-cordova-mobile-spec/blob/019c43ef/autotest/tests/bridge.tests.js
> ----------------------------------------------------------------------
> diff --git a/autotest/tests/bridge.tests.js b/autotest/tests/bridge.tests.js
> new file mode 100644
> index 0000000..5a700e0
> --- /dev/null
> +++ b/autotest/tests/bridge.tests.js
> @@ -0,0 +1,176 @@
> +/* This test requires some extra code to run, because we want benchmark results */
> +
> +/*
> + It's never going to be OVER 9000
> + http://youtu.be/SiMHTK15Pik
> +*/
> +var FENCEPOST = 9000;
> +
> +var exec = cordova.require('cordova/exec');
> +
> +var echo = cordova.require('cordova/plugin/echo'),
> +            startTime = +new Date,
> +            callCount = 0,
> +            durationMs = 1000,
> +            asyncEcho = true,
> +            useSetTimeout = true,
> +            payloadSize = 5,
> +            callsPerSecond = 0,
> +            completeSpy = null,
> +            payload = new Array(payloadSize * 10 + 1).join('012\n\n 6789');
> +
> +var vanillaWin = function(result) {
> +            callCount++;
> +            if (result != payload) {
> +                console.log('Wrong echo data!');
> +            }
> +            var elapsedMs = new Date - startTime;
> +            if (elapsedMs < durationMs) {
> +                if (useSetTimeout) {
> +                    setTimeout(echoMessage, 0);
> +                } else {
> +                    echoMessage();
> +                }
> +            } else {
> +               callsPerSecond = callCount * 1000 / elapsedMs;
> +               console.log('Calls per second: ' + callsPerSecond);
> +               if(completeSpy != null)
> +                completeSpy();
> +            }
> +        }
> +
> +var reset = function()
> +{
> +            startTime = +new Date,
> +            callCount = 0,
> +            durationMs = 1000,
> +            asyncEcho = true,
> +            useSetTimeout = true,
> +            payloadSize = 5,
> +            callsPerSecond = 0,
> +            completeSpy = null,
> +            payload = new Array(payloadSize * 10 + 1).join('012\n\n 6789');
> +}
> +
> +var echoMessage = function()
> +{
> +    echo(vanillaWin, fail, payload, asyncEcho);
> +}
> +
> +var fail = jasmine.createSpy();
> +
> +describe('The JS to Native Bridge', function() {
> +
> +    //Run the reset
> +    beforeEach(function() {
> +        reset();
> +    });
> +
> +    it('should work with prompt', function() {
> +        exec.setJsToNativeBridgeMode(0);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +    it("should work with jsObject", function() {
> +        exec.setJsToNativeBridgeMode(1);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +});
> +
> +describe("The Native to JS Bridge", function() {
> +
> +    //Run the reset
> +    beforeEach(function() {
> +        reset();
> +    });
> +
> +    it("should work with polling", function() {
> +       exec.setNativeToJsBridgeMode(0);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +    it("should work with hanging get", function() {
> +        exec.setNativeToJsBridgeMode(1);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +    it("should work with load_url (not on emulator)", function() {
> +       exec.setNativeToJsBridgeMode(2);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +    it("should work with online event", function() {
> +        exec.setNativeToJsBridgeMode(3);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +    it("should work with the private api", function() {
> +        exec.setNativeToJsBridgeMode(4);
> +        var win = jasmine.createSpy().andCallFake(function(r) {
> +            vanillaWin(r);
> +        });
> +        completeSpy = jasmine.createSpy();
> +        runs(function() {
> +            echo(win, fail, payload, asyncEcho);
> +        });
> +        waitsFor(function() { return completeSpy.wasCalled; }, "never completed", durationMs * 2);
> +        runs(function() {
> +            expect(callsPerSecond).toBeGreaterThan(FENCEPOST);
> +        });
> +    });
> +});
>
>

Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Brian LeRoux <b...@brian.io>.
Personally I don't care where the code lives; either way, the consistent
aggregate is what makes the Cordova 'platform' valuable, and having a
bench in the same spot will help us better tune how we view changes
that could have an impact across each platform.

On Tue, Sep 11, 2012 at 2:38 PM, Filip Maj <fi...@adobe.com> wrote:
> I disagree.
>
> Bridge interface is identical across platforms (cordova.exec) and a
> platform-agnostic test working against the exec interface, comparing
> relative performance/correctness of each underlying implementation is a
> perfectly reasonable, and obviously useful, test to have around.

Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Filip Maj <fi...@adobe.com>.
I disagree.

The bridge interface is identical across platforms (cordova.exec), and a
platform-agnostic test working against the exec interface, comparing the
relative performance/correctness of each underlying implementation, is a
perfectly reasonable, and obviously useful, test to have around.

On 9/11/12 2:37 PM, "Jesse MacFadyen" <pu...@gmail.com> wrote:

>-1
>These tests should live in their respective platforms. Exposing it as
>an API gives away our sausage recipe, and no-one should ever care,
>outside of the bridge developer.
>
>Cheers,
>  Jesse


Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Jesse MacFadyen <pu...@gmail.com>.
-1
These tests should live in their respective platforms. Exposing it as
an API gives away our sausage recipe, and no one outside of the bridge
developer should ever care.

Cheers,
  Jesse


On 2012-09-11, at 2:21 PM, Filip Maj <fi...@adobe.com> wrote:

> Nice work on that Joe.
>
> I definitely support enumerating the bridge modes.
>
> I'm thinking this should be a standard field that platforms can override
> on a per-platform basis. In the top-level "cordova" module perhaps?

Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Filip Maj <fi...@adobe.com>.
Nice work on that Joe.

I definitely support enumerating the bridge modes.

I'm thinking this should be a standard field that platforms can override
on a per-platform basis. In the top-level "cordova" module perhaps?
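A rough sketch of what such an overridable field could look like. Everything here is hypothetical: `cordova.bridgeModes` is an invented name, and the mode names/values are only illustrative.

```javascript
// Hypothetical sketch: a common default that platform ports override.
// `cordova.bridgeModes` is an invented field, not a real API.
var cordova = {
  bridgeModes: null  // default: this platform has no configurable bridge
};

// An Android-style port could then override the field at load time:
cordova.bridgeModes = { PROMPT: 0, JS_OBJECT: 1 };

// Tests can guard on the field and skip cleanly on other platforms:
if (cordova.bridgeModes) {
  Object.keys(cordova.bridgeModes).forEach(function (name) {
    console.log('would benchmark mode ' + name);
  });
}
```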



Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Andrew Grieve <ag...@chromium.org>.
On Tue, Sep 11, 2012 at 1:19 AM, Joe Bowser <bo...@gmail.com> wrote:

> Hey
>
> Responses inline:
>
> On Mon, Sep 10, 2012 at 7:44 PM, Andrew Grieve <ag...@chromium.org>
> wrote:
> > Hey Joe,
> >
> > Wondering why make this into a jasmine test? Does it make the results
> more
> > easily captured?
> >
>
> Yes, it also makes other known bugs glaringly obvious, like the
> numerous bugs with JS_OBJECT and the Callback Server. Now, instead of
> having to go through repro steps, I can just run this test.
>
> It also makes it easier to run a small amount of tests on a wide range
> of devices quickly instead of manually having to pick modes, and it in
> theory could work with the Continuous Integration that we're hoping to
> have in our office as well. I was doing testing on the HTC One X that
> arrived on my desk and my results looked different enough from the
> Galaxy Nexus that I wanted this. I was able to run through a
> half-dozen Android devices to see if the results on this end were
> similar to the ones that you had in the ticket.
>

That's awesome!!


>
> > Other thing I'm wondering is if this should use some JS reflection to
> detect
> > the available bridge modes since they are different on iOS and
> non-existant
> > on others (mobile-spec tests are supposed to work on all platforms
> correct?)
>
> It probably would make sense for the bridges to be enumerated for
> readability.  So far, only iOS and Android have configurable bridges,
> right?  I think this test make sense here, but not added to the "Run
> All Tests" page.
>

Sounds good.

Re: [2/2] spec commit: Adding different bridge benchmarking to the Automated Mobile Spec Tests

Posted by Joe Bowser <bo...@gmail.com>.
Hey

Responses inline:

On Mon, Sep 10, 2012 at 7:44 PM, Andrew Grieve <ag...@chromium.org> wrote:
> Hey Joe,
>
> Wondering why make this into a jasmine test? Does it make the results more
> easily captured?
>

Yes, it also makes other known bugs glaringly obvious, like the
numerous bugs with JS_OBJECT and the Callback Server. Now, instead of
having to go through repro steps, I can just run this test.

It also makes it easier to run a small number of tests on a wide range
of devices quickly instead of manually having to pick modes, and it in
theory could work with the Continuous Integration that we're hoping to
have in our office as well. I was doing testing on the HTC One X that
arrived on my desk and my results looked different enough from the
Galaxy Nexus that I wanted this. I was able to run through a
half-dozen Android devices to see if the results on this end were
similar to the ones that you had in the ticket.

> Other thing I'm wondering is if this should use some JS reflection to detect
> the available bridge modes since they are different on iOS and non-existant
> on others (mobile-spec tests are supposed to work on all platforms correct?)

It probably would make sense for the bridges to be enumerated for
readability.  So far, only iOS and Android have configurable bridges,
right?  I think this test makes sense here, but it shouldn't be added
to the "Run All Tests" page.
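Enumerating the modes for readability might look like this. The constant names are invented for illustration; only the numeric values and their descriptions come from the `it()` blocks in bridge.tests.js.

```javascript
// Invented constant names for the magic numbers passed to
// exec.setJsToNativeBridgeMode() / exec.setNativeToJsBridgeMode()
// in the test file above.
var JsToNativeModes = {
  PROMPT: 0,
  JS_OBJECT: 1
};
var NativeToJsModes = {
  POLLING: 0,
  HANGING_GET: 1,
  LOAD_URL: 2,       // not on emulator
  ONLINE_EVENT: 3,
  PRIVATE_API: 4
};

// e.g. exec.setNativeToJsBridgeMode(NativeToJsModes.HANGING_GET)
// instead of exec.setNativeToJsBridgeMode(1).
console.log(NativeToJsModes.HANGING_GET); // 1
```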