Posted to dev@shindig.apache.org by John Hjelmstad <fa...@google.com> on 2009/06/11 03:57:30 UTC

rpc.js wire compatibility

Changing subject of this sub-thread to separate discussions.
Basically, ensuring gadget and container have the same rpc.js is the central
challenge to deploying the library. So long as they run the same version,
they'll have the same getRelayChannel() implementation and thus the same
transport selection/initialization/call logic on both sides. I don't know of
any way that one transport would ever talk to another, so the best we can do
in such failure cases is to fall back to some common transport that all
browsers support. So it's critically important that integrations happen
properly. It just doesn't work for containers to cache some stale old
version of rpc.js if the library is changing.

Re: a common transport fallback for all browsers: The idea would be to
dynamically insert a script tag that loads the fallback tx and initializes
it. But given all the possible execution scenarios and initialization
issues, I want to start by seeing whether per-browser transport selection
can be made to "just work" without this.
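If we did go the dynamic-insertion route, a rough sketch might look like the
following. The function names and URL parameters (c=, container=, tx=) are
illustrative assumptions, not the actual rpc.js integration points:

```javascript
// Sketch only: names and URL shape are assumptions for illustration,
// not the shipped rpc.js API.
function buildFallbackTxUrl(baseUrl, container, txName) {
  return baseUrl + '/gadgets/js/rpc.js' +
      '?c=1&container=' + encodeURIComponent(container) +
      '&tx=' + encodeURIComponent(txName);
}

// Dynamically insert a script tag that loads the fallback transport from
// the gadget server; the loaded code is expected to initialize itself.
function loadFallbackTransport(baseUrl, container, onReady) {
  var script = document.createElement('script');
  script.src = buildFallbackTxUrl(baseUrl, container, 'ifpc');
  script.onload = onReady;
  document.getElementsByTagName('head')[0].appendChild(script);
}
```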

The only candidates we have for cross-browser communication fallback (and
incidentally, parent verifiability) are RMR and IFPC. RMR is preferable
since it doesn't require an active hosted relay. I haven't done too much
testing to see how RMR behaves on browsers other than Safari < 4 and Chrome
< 2 lately, but back when Joey and I were originally playing w/ the concept
we found two things:

1. It was slow, but no slower than IFPC (and usually faster). The slowness
is due to browsers limiting how often onresize events fire.
2. It doesn't work with 404 pages as the relay on IE, since IE substitutes
"smart" 404 pages that don't participate in normal browser security
checks. I'm not sure how best
to get around this aside from having code that somehow searches for common
files to use as a relay: domain.com/robots.txt, domain.com/favicon.ico,
etc... or just require that the container (we can always host a 1-byte file
on a gadgets server) identify a reasonable file.
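The relay-file search described above could be sketched like this; the
function name and candidate paths are hypothetical, and a container-identified
file always takes precedence:

```javascript
// Sketch: pick candidate same-domain relay files for RMR on IE, where
// "smart" 404 pages break the technique. A known-good file supplied by
// the container wins; otherwise probe common files likely to exist.
function pickRelayCandidates(domain, containerRelay) {
  if (containerRelay) {
    return [containerRelay];
  }
  return [
    'http://' + domain + '/robots.txt',
    'http://' + domain + '/favicon.ico'
  ];
}
```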

The current RMR implementation is conceptually solid in the face of varying
resize timeouts, since it employs ACKs and call ID windows. So I'm
cautiously optimistic, which is always dangerous with this lib.
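The ACK-and-window behavior amounts to something like the receiver below.
This is an illustrative sketch, not the actual rmr.transport code: calls
carry increasing ids, duplicates from repeated resize events are dropped,
and the ACK reports the highest contiguously delivered id:

```javascript
// Illustrative receiver for an ACK'd, windowed channel like RMR.
function makeReceiver(deliver) {
  var nextExpected = 0;  // lowest call id not yet delivered
  var pending = {};      // out-of-order calls waiting for the gap to fill
  return {
    receive: function (call) {  // call: {id: number, data: any}
      if (call.id < nextExpected || pending[call.id]) {
        return nextExpected - 1;  // duplicate: re-ACK, don't re-deliver
      }
      pending[call.id] = call;
      while (pending[nextExpected]) {  // deliver any contiguous run
        deliver(pending[nextExpected].data);
        delete pending[nextExpected];
        nextExpected++;
      }
      return nextExpected - 1;  // ACK value: highest contiguous id
    }
  };
}
```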

--John

On Wed, Jun 10, 2009 at 6:07 PM, Brian Eaton <be...@google.com> wrote:

> So long as we are dealing with messiness, have you given much thought
> to wire compatibility?  What do we do if the container page identifies
> that RMR is the best transport, but the gadget doesn't support RMR
> (because it is using a different version of gadgets.rpc, for example.)
>
> On Wed, Jun 10, 2009 at 6:00 PM, John Hjelmstad<jo...@gmail.com>
> wrote:
> > I recognize there are other fruitful optimizations we can do, and plan on
> > tackling several more over time.
> >
> > With this idea, I'm aiming at reducing the container JS footprint. That
> the
> > gadget footprint would also be reduced is a bonus, but clearly we have a
> big
> > mess of JS there (~45kB of core alone, by a quick count) of which rpc's
> > fraction is relatively small. Opensocial-0.[8,9] is another ball of wax,
> but
> > remember Shindig renders non-OS gadgets as well.
> >
> > So why look at optimizing rpc: it's commonly included by containers, its
> use
> > is well-circumscribed (all the containers use is gadgets.rpc.* -- core
> > gadget JS optimization, to take one example, requires assurances that no
> > gadgets.util/config/json/log/prefs/io methods are ever executed by the
> > gadget without any declaration on its part), and the container page is
> > sacred ground for latency.
> >
> > Taking a look at the numbers... the optimizations apply to refactored
> rpc.js
> > including RMR.
> >
> > That clocks in at:
> >   557 wpm.transport.opt.js
> >  4466 rpc.opt.js
> >  2825 rmr.transport.opt.js
> >  1998 nix.transport.opt.js
> >  1292 ifpc.transport.opt.js
> >  1005 fe.transport.opt.js
> >  534 dpm.transport.opt.js
> >
> > Total: 12,677B, gzipped: 4,365
> >
> > Other iterations-
> > All minus IFPC: 11,385 raw, 3,948 gzip
> > WPM only: 5,023 raw, 2,051 gzip
> > DPM only: 5,000 raw, 2,043 gzip
> > NIX only: 6,464 raw, 2,550 gzip
> > FE only: 5,471 raw, 2,158 gzip
> > RMR only: 7,291 raw, 2,951 gzip
> >
> > As Balaji notes, 15% of users don't have gzip. So the savings are
> somewhere
> > in the ballpark of ~2kB gzip and 7kB raw. The heuristic conclusion of
> > several analyses I've read is that 3000 bytes = ~60ms added page latency.
> I
> > want to reduce that as much as possible.
> >
> > Re: browser passing along the filter. I've experimented a bit with this,
> and
> > it can indeed work but with a few pain points. Essentially, the technique
> > moves the getRelayChannel() method into a snippet of JS which constructs
> a
> > script URL eg:
> > <script>
> > document.write("<script src='http://www.gadgetserver.com/gadgets/js/rpc.js?c=1&container=mycontainer&tx=" + getRelayChannel() + "'></script>");
> > </script>
> >
> > Issues:
> > 1. Brittle: the implementation of getRelayChannel() needs to be updated
> for
> > every container that's hardcoded it if any changes are needed.
> >  - Would prefer to keep integration points tightly controlled.
> > 2. Hard-coded URL generation.
> >  - Mitigated by server generation of URL, which happens on many
> containers.
> > For those, we can implement this technique later, building it on this
> code.
> > 3. Forced additional script load.
> >  - Server-side support allows server-to-server retrieval of per-browser
> JS
> > with dynamic compilation inline into container JS to avoid additional
> HTTP
> > request.
> >
> > Because the work in this CL can trivially be utilized by this technique,
> I
> > thought it best to implement this first and wait until the rpc library
> > stabilizes for some time before introducing another integration method.
> >
> > I share your concern about IE UA faking. That's one reason this code is
> an
> > alternative but not a requirement. Two possible solutions: 1. In-browser
> > technique selection (as noted above), 2. For IE UAs that are faked (all?
> > Just IE6/7?), emit ALL_TX rather than NIX_TX.
> >
> > --John
> >
> > On Wed, Jun 10, 2009 at 10:45 AM, Kevin Brown <et...@google.com> wrote:
> >
> >> Getting rid of IFPC brings the gzipped size down to around 2k, which was
> my
> >> personal preference.
> >>
> >> Compared to the opensocial-0.8, or even 'core', though, rpc is pretty
> >> small. Those libraries weigh in at 30k and 10k, respectively.
> >>
> >>
> >> On Wed, Jun 10, 2009 at 7:06 AM, Paul Lindner <li...@inuus.com>
> wrote:
> >>
> >>> Of all the optimizations out there this seems like the least
> interesting
> >>> and
> >>> more disruptive than necessary.  The (current) whole of rpc.js is
> >>>  6285 Jun  8 13:28 ./features/target/classes/features/rpc/rpc.opt.js
> >>>
> >>> Making this 6k private instead of CDN cacheable would mean that I would
> >>> have
> >>> to find another 6k to move onto the CDN to insure that I (and others)
> hit
> >>> our cold-cache numbers for loading profile / home pages.
> >>>
> >>> Do you mind sharing your goals here?  What kind of byte-savings are you
> >>> expecting?
> >>>
> >>> Is it not possible to have the browser send a filtering parameter along
> >>> with
> >>> the request?  The getRelayChannel() function figures out your
> capabilities
> >>> based on what the browser can do, not the value of it's
> >>> UA.  For example there are some people that change their User-Agent to
> >>> (mostly) Internet Explorer to get around server restrictions.  These
> >>> people
> >>> would get broken nix.
> >>>
> >>> I just wonder if there are other ways to achieve the wanted byte
> savings?
> >>>
> >>>
> >>> On Tue, Jun 9, 2009 at 2:18 PM, John Hjelmstad <fa...@google.com>
> wrote:
> >>>
> >>> > I'm sold, relying on Cache-Control: private is the better option.
> >>> > This behavior does affect IFRAME rendering paths as well, though
> doesn't
> >>> > necessarily need to. Container-side rpc can be optimized without
> >>> > gadget-side
> >>> > rpc, since gadgets.rpc(function getRelayChannel()) will return the
> same
> >>> > value on each side.
> >>> >
> >>> > As Brian mentions, every code path in GadgetRenderingServlet sets
> >>> > Cache-Control: private[,*] (except nocache=1, not an issue), so the
> >>> > potential effects are limited to JsServlet.
> >>> >
> >>> > So 1. Admittedly, any given deployment will likely see higher
> JsServlet
> >>> > traffic if choosing to opt into this feature; 2. this suggests that
> >>> > JsLibrary field isBrowserSpecific should be added, with JsServlet
> >>> signaling
> >>> > noProxy behavior in caching headers when this bit is true for any
> >>> library
> >>> > it
> >>> > emits.
> >>> >
> >>> > John
> >>> >
> >>> > On Tue, Jun 9, 2009 at 1:53 PM, Brian Eaton <be...@google.com>
> wrote:
> >>> >
> >>> > > I don't trust the vary header any farther than I can spit.
> >>> > >
> >>> > > For example, at least one version of IE interprets a Vary header
> with
> >>> > > any value other than user-agent to mean "OMG, I don't understand,
> >>> > > don't cache this":
> >>> > >    http://marc.info/?l=apache-modgzip&m=103958533520502&w=2
> >>> > >
> >>> > > In a comment to Mark's blog post, he mentions other implementation
> >>> > issues:
> >>> > >
> >>> > > "More complex situations -- e.g., with multiple Vary headers and
> >>> > > multiple request headers -- caused problems on a number of
> >>> > > implementations (i.e., they'd return the wrong thing)."
> >>> > >
> >>> > > IMHO, we should stick to "cache-control: private."
> >>> > >
> >>> > > We need that on most iframes anyway, so this keeps the
> cache-control
> >>> > > changes from rippling out quite so far.
> >>> > >
> >>> > > On Tue, Jun 9, 2009 at 1:44 PM, John Panzer<jp...@google.com>
> >>> wrote:
> >>> > > > This seems relevant:
> >>> http://www.mnot.net/blog/2007/06/20/proxy_caching
> >>> > > > (It appears that Vary: is fairly safe in the sense that caches
> that
> >>> > don't
> >>> > > > deal with it nicely -- e.g., caching variants -- will just mark
> the
> >>> > > content
> >>> > > > as uncacheable.  Though I don't think this was rigorously
> tested.)
> >>> > > >
> >>> > > > --
> >>> > > > John Panzer / Blogger
> >>> > > > jpanzer@google.com / abstractioneer.org <
> >>> > http://www.abstractioneer.org/>
> >>> > > /
> >>> > > > @jpanzer
> >>> > > >
> >>> > > >
> >>> > > >
> >>> > > > On Tue, Jun 9, 2009 at 1:25 PM, <jo...@gmail.com> wrote:
> >>> > > >
> >>> > > >> Hi Paul:
> >>> > > >>
> >>> > > >> You're absolutely right. Vary: User-Agent; is preferable to me.
> Do
> >>> you
> >>> > > >> have any reason in mind for why Cache-Control: private in
> addition
> >>> to
> >>> > or
> >>> > > >> instead of Vary would be preferable? Ie. common support by
> various
> >>> > CDNs.
> >>> > > >>
> >>> > > >> I've been thinking how best to link service of custom rpc to
> >>> setting
> >>> > > >> this header. Ultimately JsServlet or a Filter needs to do so.
> >>> Thoughts
> >>> > > >> on this? The best idea I had was to add something like
> >>> > > >> isBrowserSpecific() to JsLibrary so that the relevant output
> code
> >>> > > >> (gadget rendering, js servlet) could act accordingly.
> >>> > > >>
> >>> > > >> Kevin - re: StringBuffer, comment isn't particularly
> prescriptive
> >>> so
> >>> > > >> I'll assume the complaint is use of StringBuffer rather than
> >>> > > >> StringBuilder. If so, fixed. If not, let me know.
> >>> > > >>
> >>> > > >>
> >>> > > >> http://codereview.appspot.com/63210
> >>> > > >>
> >>> > > >
> >>> > >
> >>> >
> >>>
> >>
> >>
> >
>

RE: [round trip compatibility] Re: rpc.js wire compatibility

Posted by "Weygandt, Jon" <jw...@ebay.com>.
Paul,

Did any of your solutions make it back to the code branch?

We have a similar clustered deployment, and would like to use what
exists or take part in creating and testing the solution.

How do you inform the container of the version/generation number of the
server?

Should we start to introduce a version number for some of the features?
Fortunately, only very few features require this (rpc, pubsub and core
are the only ones with code spanning gadget and container).

Jon 

-----Original Message-----
From: Paul Lindner [mailto:lindner@inuus.com] 
Sent: Thursday, June 11, 2009 1:24 PM
To: shindig-dev@incubator.apache.org
Subject: Re: [round trip compatibility] Re: rpc.js wire compatibility

There's actually a larger problem here and it affects more than just the
rpc calls.
One problem area is upgrading a cluster of shindig machines to ensure
that iframe content matches up with container js and forced-libs js
content.

If you upgrade the cluster in-place you'll end up with requests for
javascript going to the old build which is then cached by the browser.
This is especially problematic because shindig always responds
'not-modified' to an IMS request when a v= param is present.

If you have session affinity or other load balancer tricks you might not
have this problem, however with a CDN in place the request for the JS
content comes from the CDN host, not the user's browser.

The solution used at hi5 was to add a 'generation' param to all
versioned URLs.  To do the rollout you then did:

  Rolling upgrade of hosts from v to v+1
  Bump generation number
  Rolling restart of all hosts

One possible solution is to compare the v= param the browser sends with
the internal hash code as calculated by the server.  If they mismatch,
you can set a low expiration value.  (Although this still doesn't help
with IMS
requests..)

The other idea is to have separate instances and change the parent path
on a per-build basis for all gadgets/js requests.

  /opensocial-v1/gadgets/js
  /opensocial-v2/gadgets/js

This has a nice side effect that you can deploy both versions
side-by-side and drain down old for new.



On Thu, Jun 11, 2009 at 1:07 PM, John Hjelmstad <fa...@google.com>
wrote:

> On Thu, Jun 11, 2009 at 1:46 AM, Kevin Brown <et...@google.com> wrote:
>
> > Realistically speaking, 'new' channels aren't going to be an issue. 
> > All
> new
> > browsers (and new browser versions) will use postMessage. We have a
> channel
> > that is 'fast enough' for all legacy browsers, and over time we will
> remove
> > libraries rather than add them.
> >
> > The reasons why we might add a new channel:
> >
> > 1. Some big security problem with an existing channel. Most likely 
> > we
> will
> > just switch back to IFPC for the browser(s) that are affected if 
> > this happens. IE 6 is really the only browser where this is a 
> > significant risk
> > --
> > all other browsers (including IE7) are on an auto update path that 
> > will make the other legacy channels irrelevant by the end of the 
> > year.
>
>
> Agreed.
>
>
> >
> >
> > 2. I can't think of any other good reason. Vanity?
>
>
> I shudder at the prospect of adding yet more rpc code for vanity :)
>
>
> >
> >
> > The real issue is going to be code compatibility itself. Your 
> > proposed solution wouldn't make any difference if the code isn't
compatible.
> >
> > I stand by what I've said for nearly 2 years on this issue, which is
> > that the only viable option for the rpc feature is for containers to

> > source the
> file
> > directly from the gadget server. Every other approach has been full 
> > of compatibility bugs.
>
>
> +1, bottom line.
>
> --John
>
>
> >
> >
> > On Wed, Jun 10, 2009 at 10:03 PM, Brian Eaton <be...@google.com>
wrote:
> >
> > > On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com>
> wrote:
> > > > I don't know of
> > > > any way that one transport would ever talk to another, so the 
> > > > best we
> > can
> > > do
> > > > in such failure cases is to fall back to some common transport 
> > > > that
> all
> > > > browsers support. So it's critically important that integrations
> happen
> > > > properly. It just doesn't work for containers to cache some 
> > > > stale old version of rpc.js if the library is changing.
> > >
> > > Hmm.  This feels wrong.  What if the container passed acceptable 
> > > transport types on the gadget render URL instead, then the gadget 
> > > picked from that list?
> > >
> > > That way there would be no problem if the container didn't support

> > > RMR, but the gadget did.
> > >
> >
>

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by John Hjelmstad <fa...@google.com>.
On Thu, Jun 11, 2009 at 2:49 PM, Paul Lindner <li...@inuus.com> wrote:

> On Thu, Jun 11, 2009 at 1:35 PM, Kevin Brown <et...@google.com> wrote:
>
> > On Thu, Jun 11, 2009 at 1:24 PM, Paul Lindner <li...@inuus.com> wrote:
> >
> > > There's actually a larger problem here and it affects more than just
> the
> > > rpc
> > > calls.
> > > One problem area is upgrading a cluster of shindig machines to insure
> > that
> > > iframe content matches up with container js and forced-libs js content.
> > >
> > > If you upgrade the cluster in-place you'll end up with requests for
> > > javascript going to the old build which is then cached by the browser.
> > >  This
> > > is especially problematic because shindig always responds
> 'not-modified'
> > to
> > > an IMS request when a v= param is present.
> >
> >
> > We send not-modified when we get an If-Modified-Since, not when there's a
> v
> > param present.
> >
>
> Actually:
>
>    // If an If-Modified-Since header is ever provided, we always say
>    // not modified. This is because when there actually is a change,
>    // cache busting should occur.
>    if (req.getHeader("If-Modified-Since") != null &&
>        req.getParameter("v") != null) {
>      resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
>      return;
>     }
>
>
>
>
> >
> >
> > > If you have session affinity or other load balancer tricks you might
> not
> > > have this problem, however with a CDN in place the request for the JS
> > > content comes from the CDN host, not the user's browser.
> >
> >
> > If you're using a CDN, or a caching reverse proxy, it's best to have all
> > relevant versions available there for as long as is necessary.
> >
> > We use a caching reverse proxy, which ensures that the 'right' file gets
> > served 99.9% of the time.
> >
>
> Yes those caches get an amazing hit rate, but how do you insure that a user
> that has the following Iframe:
>
>
>
> http://lfkq9vbe9u4sg98ip8rfvf00l7atcn3d.ig.ig.sandbox.gmodules.com/gadgets/ifr?url=http://www.google.com/ig/modules/fv.xml&libs=core:core.io:core.iglegacy
>
>
> gets the correct JS content here:
>
>
> http://www.sandbox.gmodules.com/gadgets/js/core:core.iglegacy:core.io.js?v=7565e07cd2ecc6d7e363f7e55e79fbc&container=ig&debug=0


Leaving aside the fact that rpc doesn't participate in this ;), the v= param
is computed as a hash of the JS to be emitted. If it (eg rpc) changes, v
changes. At that point you need some kind of frontend affinity to match
versions.

Still, that may not happen, and worse yet a CDN might pick up a stale
version of rpc in response to a request carrying the new v= param.

The problem is that v= values are assumed to be properly generated and
consistently served. Rolling server startups (esp. w/o affinity, which IMO
is a high bar to demand of any Shindig deployment) introduce consistency
errors. We could consider v= verification to mitigate this, in JsServlet at
least to start. Efficiently computing v= for gadget IFRAMEs seems more
difficult.

--John


>
>
> when you're in the midst of upgr
>

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by Brian Eaton <be...@google.com>.
On Thu, Jun 11, 2009 at 3:30 PM, Kevin Brown<et...@google.com> wrote:
> Currently, the latter is cached by our reverse proxy so the version
> requested always comes from there. You can still bump into problems if the
> 'new' version winds up getting pulled from an 'old' server, though. We can
> mitigate that by adding an actual check on the v param instead of just using
> it for cache busting. This works if you have a load balancer that allows you
> to bounce the request off of another server on failure, but I doubt many
> organizations have a setup like that.

Leaving aside caching questions for a minute, what about a viable
experiment framework for this code?

If we have wire compatibility (with protocol options chosen by the
container page), we can do experiments where we opt-in certain users
to transport changes.
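The container-chosen-protocol idea could work roughly as sketched below: the
container advertises the transports it supports on the gadget render URL, and
the gadget picks the first one it also implements. The parameter name (rpctx)
and transport ids here are assumptions for illustration:

```javascript
// Sketch of wire-level transport negotiation via the render URL.
function transportsFromUrl(iframeUrl) {
  var match = /[?&]rpctx=([^&]*)/.exec(iframeUrl);
  return match ? decodeURIComponent(match[1]).split(',') : [];
}

// Pick the first mutually supported transport; fall back to a transport
// assumed to work in every browser (e.g. IFPC) when there is no overlap.
function pickTransport(containerTx, gadgetTx) {
  for (var i = 0; i < containerTx.length; i++) {
    if (gadgetTx.indexOf(containerTx[i]) !== -1) {
      return containerTx[i];
    }
  }
  return 'ifpc';
}
```

This also gives the experiment hook: opting a user into a transport change is
just a different rpctx= value on the render URL.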

Cheers,
Brian

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by Kevin Brown <et...@google.com>.
On Thu, Jun 11, 2009 at 2:49 PM, Paul Lindner <li...@inuus.com> wrote:

> On Thu, Jun 11, 2009 at 1:35 PM, Kevin Brown <et...@google.com> wrote:
>
> > On Thu, Jun 11, 2009 at 1:24 PM, Paul Lindner <li...@inuus.com> wrote:
> >
> > > There's actually a larger problem here and it affects more than just
> the
> > > rpc
> > > calls.
> > > One problem area is upgrading a cluster of shindig machines to insure
> > that
> > > iframe content matches up with container js and forced-libs js content.
> > >
> > > If you upgrade the cluster in-place you'll end up with requests for
> > > javascript going to the old build which is then cached by the browser.
> > >  This
> > > is especially problematic because shindig always responds
> 'not-modified'
> > to
> > > an IMS request when a v= param is present.
> >
> >
> > We send not-modified when we get an If-Modified-Since, not when there's a
> v
> > param present.
> >
>
> Actually:
>
>    // If an If-Modified-Since header is ever provided, we always say
>    // not modified. This is because when there actually is a change,
>    // cache busting should occur.
>    if (req.getHeader("If-Modified-Since") != null &&
>        req.getParameter("v") != null) {
>      resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
>      return;
>     }
>
>
>
>
> >
> >
> > > If you have session affinity or other load balancer tricks you might
> not
> > > have this problem, however with a CDN in place the request for the JS
> > > content comes from the CDN host, not the user's browser.
> >
> >
> > If you're using a CDN, or a caching reverse proxy, it's best to have all
> > relevant versions available there for as long as is necessary.
> >
> > We use a caching reverse proxy, which ensures that the 'right' file gets
> > served 99.9% of the time.
> >
>
> Yes those caches get an amazing hit rate, but how do you insure that a user
> that has the following Iframe:
>
>
>
> http://lfkq9vbe9u4sg98ip8rfvf00l7atcn3d.ig.ig.sandbox.gmodules.com/gadgets/ifr?url=http://www.google.com/ig/modules/fv.xml&libs=core:core.io:core.iglegacy
>
>
> gets the correct JS content here:
>
>
> http://www.sandbox.gmodules.com/gadgets/js/core:core.iglegacy:core.io.js?v=7565e07cd2ecc6d7e363f7e55e79fbc&container=ig&debug=


Currently, the latter is cached by our reverse proxy so the version
requested always comes from there. You can still bump into problems if the
'new' version winds up getting pulled from an 'old' server, though. We can
mitigate that by adding an actual check on the v param instead of just using
it for cache busting. This works if you have a load balancer that allows you
to bounce the request off of another server on failure, but I doubt many
organizations have a setup like that.

If we wind up using Cache-Control: private, though, this doesn't work any
longer.


>
> 0
>
> when you're in the midst of upgr
>

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by Paul Lindner <li...@inuus.com>.
On Thu, Jun 11, 2009 at 1:35 PM, Kevin Brown <et...@google.com> wrote:

> On Thu, Jun 11, 2009 at 1:24 PM, Paul Lindner <li...@inuus.com> wrote:
>
> > There's actually a larger problem here and it affects more than just the
> > rpc
> > calls.
> > One problem area is upgrading a cluster of shindig machines to insure
> that
> > iframe content matches up with container js and forced-libs js content.
> >
> > If you upgrade the cluster in-place you'll end up with requests for
> > javascript going to the old build which is then cached by the browser.
> >  This
> > is especially problematic because shindig always responds 'not-modified'
> to
> > an IMS request when a v= param is present.
>
>
> We send not-modified when we get an If-Modified-Since, not when there's a v
> param present.
>

Actually:

    // If an If-Modified-Since header is ever provided, we always say
    // not modified. This is because when there actually is a change,
    // cache busting should occur.
    if (req.getHeader("If-Modified-Since") != null &&
        req.getParameter("v") != null) {
      resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
      return;
    }




>
>
> > If you have session affinity or other load balancer tricks you might not
> > have this problem, however with a CDN in place the request for the JS
> > content comes from the CDN host, not the user's browser.
>
>
> If you're using a CDN, or a caching reverse proxy, it's best to have all
> relevant versions available there for as long as is necessary.
>
> We use a caching reverse proxy, which ensures that the 'right' file gets
> served 99.9% of the time.
>

Yes, those caches get an amazing hit rate, but how do you ensure that a
user that has the following iframe:


http://lfkq9vbe9u4sg98ip8rfvf00l7atcn3d.ig.ig.sandbox.gmodules.com/gadgets/ifr?url=http://www.google.com/ig/modules/fv.xml&libs=core:core.io:core.iglegacy


gets the correct JS content here:

http://www.sandbox.gmodules.com/gadgets/js/core:core.iglegacy:core.io.js?v=7565e07cd2ecc6d7e363f7e55e79fbc&container=ig&debug=0

when you're in the midst of upgrading?

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by Kevin Brown <et...@google.com>.
On Thu, Jun 11, 2009 at 1:24 PM, Paul Lindner <li...@inuus.com> wrote:

> There's actually a larger problem here and it affects more than just the
> rpc
> calls.
> One problem area is upgrading a cluster of shindig machines to insure that
> iframe content matches up with container js and forced-libs js content.
>
> If you upgrade the cluster in-place you'll end up with requests for
> javascript going to the old build which is then cached by the browser.
>  This
> is especially problematic because shindig always responds 'not-modified' to
> an IMS request when a v= param is present.


We send not-modified when we get an If-Modified-Since, not when there's a v
param present.


> If you have session affinity or other load balancer tricks you might not
> have this problem, however with a CDN in place the request for the JS
> content comes from the CDN host, not the user's browser.


If you're using a CDN, or a caching reverse proxy, it's best to have all
relevant versions available there for as long as is necessary.

We use a caching reverse proxy, which ensures that the 'right' file gets
served 99.9% of the time.


>
>
> The solution used at hi5 was to add a 'generation' param to all versioned
> URLs.  To do the rollout you then did:
>
>  Rolling upgrade of hosts from v to v+1
>  Bump generation number
>  Rolling restart of all hosts
>
> One possible solution is to compare the v= param the browser sends with the
> internal hash-code as calculated by the server.  If they mis-match you can
> set low expiration value.  (Although this still doesn't help with IMS
> requests..)
>
> The other idea is to have separate instances and change the parent path on
> a
> per-build basis for all gadgets/js requests.
>
>  /opensocial-v1/gadgets/js
>  /opensocial-v2/gadgets/js
>
> This has a nice side effect that you can deploy both versions side-by-side
> and drain down old for new.
>
>
>
> On Thu, Jun 11, 2009 at 1:07 PM, John Hjelmstad <fa...@google.com> wrote:
>
> > On Thu, Jun 11, 2009 at 1:46 AM, Kevin Brown <et...@google.com> wrote:
> >
> > > Realistically speaking, 'new' channels aren't going to be an issue. All
> > new
> > > browsers (and new browser versions) will use postMessage. We have a
> > channel
> > > that is 'fast enough' for all legacy browsers, and over time we will
> > remove
> > > libraries rather than add them.
> > >
> > > The reasons why we might add a new channel:
> > >
> > > 1. Some big security problem with an existing channel. Most likely we
> > will
> > > just switch back to IFPC for the browser(s) that are affected if this
> > > happens. IE 6 is really the only browser where this is a significant
> risk
> > > --
> > > all other browsers (including IE7) are on an auto update path that will
> > > make
> > > the other legacy channels irrelevant by the end of the year.
> >
> >
> > Agreed.
> >
> >
> > >
> > >
> > > 2. I can't think of any other good reason. Vanity?
> >
> >
> > I shudder at the prospect of adding yet more rpc code for vanity :)
> >
> >
> > >
> > >
> > > The real issue is going to be code compatibility itself. Your proposed
> > > solution wouldn't make any difference if the code isn't compatible.
> > >
> > > I stand by what I've said for nearly 2 years on this issue, which is
> that
> > > the
> > > only viable option for the rpc feature is for containers to source the
> > file
> > > directly from the gadget server. Every other approach has been full of
> > > compatibility bugs.
> >
> >
> > +1, bottom line.
> >
> > --John
> >
> >
> > >
> > >
> > > On Wed, Jun 10, 2009 at 10:03 PM, Brian Eaton <be...@google.com>
> wrote:
> > >
> > > > On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com>
> > wrote:
> > > > > I don't know of
> > > > > any way that one transport would ever talk to another, so the best
> we
> > > can
> > > > do
> > > > > in such failure cases is to fall back to some common transport that
> > all
> > > > > browsers support. So it's critically important that integrations
> > happen
> > > > > properly. It just doesn't work for containers to cache some stale
> old
> > > > > version of rpc.js if the library is changing.
> > > >
> > > > Hmm.  This feels wrong.  What if the container passed acceptable
> > > > transport types on the gadget render URL instead, then the gadget
> > > > picked from that list?
> > > >
> > > > That way there would be no problem if the container didn't support
> > > > RMR, but the gadget did.
> > > >
> > >
> >
>

Re: [round trip compatibility] Re: rpc.js wire compatibility

Posted by Paul Lindner <li...@inuus.com>.
There's actually a larger problem here and it affects more than just the rpc
calls.
One problem area is upgrading a cluster of shindig machines to ensure that
iframe content matches up with container js and forced-libs js content.

If you upgrade the cluster in-place you'll end up with requests for
javascript going to the old build which is then cached by the browser.  This
is especially problematic because shindig always responds 'not-modified' to
an IMS request when a v= param is present.

If you have session affinity or other load balancer tricks you might not
have this problem, however with a CDN in place the request for the JS
content comes from the CDN host, not the user's browser.

The solution used at hi5 was to add a 'generation' param to all versioned
URLs.  To do the rollout you then did:

  Rolling upgrade of hosts from v to v+1
  Bump generation number
  Rolling restart of all hosts

One possible solution is to compare the v= param the browser sends with the
internal hash code as calculated by the server.  If they mismatch, you can
set a low expiration value.  (Although this still doesn't help with IMS
requests..)

The other idea is to have separate instances and change the parent path on a
per-build basis for all gadgets/js requests.

  /opensocial-v1/gadgets/js
  /opensocial-v2/gadgets/js

This has a nice side effect that you can deploy both versions side-by-side
and drain down old for new.



On Thu, Jun 11, 2009 at 1:07 PM, John Hjelmstad <fa...@google.com> wrote:

> On Thu, Jun 11, 2009 at 1:46 AM, Kevin Brown <et...@google.com> wrote:
>
> > Realistically speaking, 'new' channels aren't going to be an issue. All
> new
> > browsers (and new browser versions) will use postMessage. We have a
> channel
> > that is 'fast enough' for all legacy browsers, and over time we will
> remove
> > libraries rather than add them.
> >
> > The reasons why we might add a new channel:
> >
> > 1. Some big security problem with an existing channel. Most likely we
> will
> > just switch back to IFPC for the browser(s) that are affected if this
> > happens. IE 6 is really the only browser where this is a significant risk
> > --
> > all other browsers (including IE7) are on an auto update path that will
> > make
> > the other legacy channels irrelevant by the end of the year.
>
>
> Agreed.
>
>
> >
> >
> > 2. I can't think of any other good reason. Vanity?
>
>
> I shudder at the prospect of adding yet more rpc code for vanity :)
>
>
> >
> >
> > The real issue is going to be code compatibility itself. Your proposed
> > solution wouldn't make any difference if the code isn't compatible.
> >
> > I stand by what I've said for nearly 2 years on this issue, which is that
> > the
> > only viable option for the rpc feature is for containers to source the
> file
> > directly from the gadget server. Every other approach has been full of
> > compatibility bugs.
>
>
> +1, bottom line.
>
> --John
>
>
> >
> >
> > On Wed, Jun 10, 2009 at 10:03 PM, Brian Eaton <be...@google.com> wrote:
> >
> > > On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com>
> wrote:
> > > > I don't know of
> > > > any way that one transport would ever talk to another, so the best we
> > can
> > > do
> > > > in such failure cases is to fall back to some common transport that
> all
> > > > browsers support. So it's critically important that integrations
> happen
> > > > properly. It just doesn't work for containers to cache some stale old
> > > > version of rpc.js if the library is changing.
> > >
> > > Hmm.  This feels wrong.  What if the container passed acceptable
> > > transport types on the gadget render URL instead, then the gadget
> > > picked from that list?
> > >
> > > That way there would be no problem if the container didn't support
> > > RMR, but the gadget did.
> > >
> >
>

Re: rpc.js wire compatibility

Posted by John Hjelmstad <fa...@google.com>.
On Thu, Jun 11, 2009 at 1:46 AM, Kevin Brown <et...@google.com> wrote:

> Realistically speaking, 'new' channels aren't going to be an issue. All new
> browsers (and new browser versions) will use postMessage. We have a channel
> that is 'fast enough' for all legacy browsers, and over time we will remove
> libraries rather than add them.
>
> The reasons why we might add a new channel:
>
> 1. Some big security problem with an existing channel. Most likely we will
> just switch back to IFPC for the browser(s) that are affected if this
> happens. IE 6 is really the only browser where this is a significant risk
> --
> all other browsers (including IE7) are on an auto update path that will
> make
> the other legacy channels irrelevant by the end of the year.


Agreed.


>
>
> 2. I can't think of any other good reason. Vanity?


I shudder at the prospect of adding yet more rpc code for vanity :)


>
>
> The real issue is going to be code compatibility itself. Your proposed
> solution wouldn't make any difference if the code isn't compatible.
>
> I stand by what I've said for nearly 2 years on this issue, which is that
> the
> only viable option for the rpc feature is for containers to source the file
> directly from the gadget server. Every other approach has been full of
> compatibility bugs.


+1, bottom line.

--John


>
>
> On Wed, Jun 10, 2009 at 10:03 PM, Brian Eaton <be...@google.com> wrote:
>
> > On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com> wrote:
> > > I don't know of
> > > any way that one transport would ever talk to another, so the best we
> can
> > do
> > > in such failure cases is to fall back to some common transport that all
> > > browsers support. So it's critically important that integrations happen
> > > properly. It just doesn't work for containers to cache some stale old
> > > version of rpc.js if the library is changing.
> >
> > Hmm.  This feels wrong.  What if the container passed acceptable
> > transport types on the gadget render URL instead, then the gadget
> > picked from that list?
> >
> > That way there would be no problem if the container didn't support
> > RMR, but the gadget did.
> >
>

Re: rpc.js wire compatibility

Posted by Kevin Brown <et...@google.com>.
Realistically speaking, 'new' channels aren't going to be an issue. All new
browsers (and new browser versions) will use postMessage. We have a channel
that is 'fast enough' for all legacy browsers, and over time we will remove
libraries rather than add them.

The reasons why we might add a new channel:

1. Some big security problem with an existing channel. Most likely we will
just switch back to IFPC for the browser(s) that are affected if this
happens. IE 6 is really the only browser where this is a significant risk --
all other browsers (including IE7) are on an auto update path that will make
the other legacy channels irrelevant by the end of the year.

2. I can't think of any other good reason. Vanity?

The real issue is going to be code compatibility itself. Your proposed
solution wouldn't make any difference if the code isn't compatible.

I stand by what I've said for nearly 2 years on this issue, which is that the
only viable option for the rpc feature is for containers to source the file
directly from the gadget server. Every other approach has been full of
compatibility bugs.
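In concrete terms, sourcing rpc.js directly from the gadget server looks
something like this (URL shape illustrative, not guaranteed Shindig syntax):

```javascript
// The container builds its rpc.js script src against the same gadget
// server that renders its gadget iframes, instead of hosting or caching
// its own copy of the file.
function rpcScriptUrl(gadgetServerBase, containerName) {
  return gadgetServerBase + '/gadgets/js/rpc.js?c=1&container=' +
      encodeURIComponent(containerName);
}

var src = rpcScriptUrl('http://gadgets.example.com', 'default');
// In the container page:
//   <script src="...gadgets/js/rpc.js?c=1&container=default"></script>
// Because the same server serves both sides, gadget and container are
// guaranteed to run the identical rpc.js build.
```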

On Wed, Jun 10, 2009 at 10:03 PM, Brian Eaton <be...@google.com> wrote:

> On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com> wrote:
> > I don't know of
> > any way that one transport would ever talk to another, so the best we can
> do
> > in such failure cases is to fall back to some common transport that all
> > browsers support. So it's critically important that integrations happen
> > properly. It just doesn't work for containers to cache some stale old
> > version of rpc.js if the library is changing.
>
> Hmm.  This feels wrong.  What if the container passed acceptable
> transport types on the gadget render URL instead, then the gadget
> picked from that list?
>
> That way there would be no problem if the container didn't support
> RMR, but the gadget did.
>

Re: rpc.js wire compatibility

Posted by Brian Eaton <be...@google.com>.
On Wed, Jun 10, 2009 at 6:57 PM, John Hjelmstad<fa...@google.com> wrote:
> I don't know of
> any way that one transport would ever talk to another, so the best we can do
> in such failure cases is to fall back to some common transport that all
> browsers support. So it's critically important that integrations happen
> properly. It just doesn't work for containers to cache some stale old
> version of rpc.js if the library is changing.

Hmm.  This feels wrong.  What if the container passed acceptable
transport types on the gadget render URL instead, then the gadget
picked from that list?

That way there would be no problem if the container didn't support
RMR, but the gadget did.
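Brian's negotiation idea could be sketched like this (the `rpctx` parameter
name and transport identifiers are hypothetical, chosen only to illustrate
the handshake):

```javascript
// Transports the gadget-side rpc.js implements, in preference order.
var GADGET_SUPPORTED = ['wpm', 'nix', 'rmr', 'ifpc'];

// The container lists its acceptable transports on the render URL; the
// gadget picks the first one both sides implement, falling back to IFPC,
// the lowest common denominator every browser supports.
function pickTransport(renderUrl) {
  var match = /[?&]rpctx=([^&]*)/.exec(renderUrl);
  if (!match) return 'ifpc';
  var containerSupported = decodeURIComponent(match[1]).split(',');
  for (var i = 0; i < GADGET_SUPPORTED.length; i++) {
    if (containerSupported.indexOf(GADGET_SUPPORTED[i]) >= 0) {
      return GADGET_SUPPORTED[i];
    }
  }
  return 'ifpc';
}
```

This would indeed avoid the case Brian raises: a container that never
advertises 'rmr' can never be offered it, regardless of what the gadget's
rpc.js build supports.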