Posted to dev@kudu.apache.org by Todd Lipcon <to...@cloudera.com> on 2017/01/25 20:37:55 UTC

TokenPB contents

Hey folks

I'm working on the token signing/verification stuff at the moment. Curious
to solicit some opinions on this:


message TokenPB {
  // The actual token contents. This is typically a serialized
  // protobuf of its own. However, we use a 'bytes' field, since
  // protobuf doesn't guarantee that if two implementations serialize
  // a protobuf, they'll necessarily get bytewise identical results,
  // particularly in the presence of unknown fields.
  optional bytes token_contents = 1;

  // The cryptographic signature of 'token_contents'.
  optional bytes signature = 2;

  // The sequence number of the key which produced 'signature'.
  optional int64 signing_key_seq_num = 3;
};

The thing that's currently missing is an expiration timestamp for the
signature. I see two options here:

*Option A)* say that the TokenPB itself doesn't capture expiration, and if
a particular type of token needs expiration, it would have to carry an
'expiration time' inside its own token contents.

*pros:*
- token signing/verification is just a simple operation on the
'token_contents' string

*Cons:*
- would likely end up with redundant code between AuthN and AuthZ tokens,
both of which need expiration. However, that code isn't very complicated
(just a timestamp comparison), so maybe it's not a big deal?

*Option B)* add an expiration timestamp field to TokenPB
*pros:*
- consolidate the expiration checking code into TokenVerifier
*cons:*
- now, in order to sign/verify a token, we actually need to be signing
something like a concatenation of 'token_contents + expiration'. Not too
hard to construct this concatenation, but it does add some complexity.

Any strong opinions either way?

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera

Re: TokenPB contents

Posted by Todd Lipcon <to...@cloudera.com>.
Well, I discovered an issue with repeated enum flags in protobuf
(KUDU-1850), but I went with an appropriate workaround.

You can find the complete patch here: https://gerrit.cloudera.org/#/c/5796/

Please take a look.
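
For the curious, a rough sketch of the general shape of such a workaround
(illustrative names only; the linked patch is authoritative): carry the
flags as raw integers rather than as a repeated enum, since protobuf
parsers treat unrecognized values of an enum field as unknown fields, so
an old verifier iterating the repeated field would never see the very flag
meant to make it reject the token.

#include <cstdint>
#include <optional>

// Features are declared as an enum for readability but transmitted as
// 'repeated uint32' on the wire, so values minted by newer versions still
// reach old readers instead of vanishing during parsing.
enum class TokenFeature : uint32_t { kUnknown = 0 };

// Maps a wire value to a known feature, or nullopt if this build doesn't
// recognize it (today no real features exist, so every value is unknown).
std::optional<TokenFeature> ToKnownFeature(uint32_t wire_value) {
  (void)wire_value;
  return std::nullopt;
}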

-Todd

On Wed, Jan 25, 2017 at 3:35 PM, Dan Burkert <da...@apache.org> wrote:

> On Wed, Jan 25, 2017 at 3:33 PM, Alexey Serbin <as...@cloudera.com>
> wrote:
> >
> > Or is the proposal that if new fields are found in a token, the token
> > shouldn't be accepted at all?  In that case it might be an issue during
> > a rolling upgrade, I think.
> >
>
> That's what I was thinking, but that's a good reason not to do it that
> way.  Our normal required-feature-flags mechanism should work in the case
> where the field is not critical/required.



-- 
Todd Lipcon
Software Engineer, Cloudera

Re: TokenPB contents

Posted by Dan Burkert <da...@apache.org>.
On Wed, Jan 25, 2017 at 3:33 PM, Alexey Serbin <as...@cloudera.com> wrote:
>
> Or is the proposal that if new fields are found in a token, the token
> shouldn't be accepted at all?  In that case it might be an issue during a
> rolling upgrade, I think.
>

That's what I was thinking, but that's a good reason not to do it that
way.  Our normal required-feature-flags mechanism should work in the case
where the field is not critical/required.



Re: TokenPB contents

Posted by Alexey Serbin <as...@cloudera.com>.
Probably we could, but I haven't done my research on that yet.  I'll need
to tinker with it a bit to get a better understanding.  At least I know
that in proto3, unknown fields are no longer present when serializing a
previously de-serialized message:

  https://developers.google.com/protocol-buffers/docs/proto3#updating

Also, I'm not sure whether it would be possible to differentiate between
compatible and non-compatible extensions.  Or is the proposal that if new
fields are found in a token, the token shouldn't be accepted at all?  In
that case it might be an issue during a rolling upgrade, I think.


Best regards,

Alexey

On Wed, Jan 25, 2017 at 3:14 PM, Dan Burkert <da...@apache.org> wrote:

> That's an interesting idea - say if the format evolved to have an
> additional field which restricts use of the token?  Could we use the
> protobuf unknown fields API to recognize if this happened?
>
> - Dan

Re: TokenPB contents

Posted by Todd Lipcon <to...@cloudera.com>.
Yeah, that's a good idea. It's sort of like the 'critical' attribute in
X.509: "if a reader sees an attribute marked as critical and doesn't know
what to make of it, don't accept the cert".

I'll add a repeated enum for this even though it will be empty for now.
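
For concreteness, a minimal sketch of that rule at verification time
(illustrative names, not the actual TokenVerifier API): even a correctly
signed token is rejected when it advertises a flag this verifier doesn't
know.

#include <cstdint>
#include <unordered_set>
#include <vector>

// Hypothetical check mirroring X.509 'critical' semantics. The known set
// is empty for now; as incompatible features are introduced their values
// get added here, and anything outside the set stays fatal.
bool AcceptCriticalFlags(const std::vector<uint32_t>& token_flags) {
  static const std::unordered_set<uint32_t> kKnownFlags;  // empty today
  for (uint32_t flag : token_flags) {
    if (kKnownFlags.count(flag) == 0) {
      return false;  // unknown critical flag: reject despite a valid signature
    }
  }
  return true;
}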

On Wed, Jan 25, 2017 at 3:14 PM, Dan Burkert <da...@apache.org> wrote:

> That's an interesting idea - say if the format evolved to have an
> additional field which restricts use of the token?  Could we use the
> protobuf unknown fields API to recognize if this happened?
>
> - Dan



-- 
Todd Lipcon
Software Engineer, Cloudera

Re: TokenPB contents

Posted by Dan Burkert <da...@apache.org>.
That's an interesting idea - say if the format evolved to have an
additional field which restricts use of the token?  Could we use the
protobuf unknown fields API to recognize if this happened?

- Dan
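
For concreteness, a minimal sketch of what that check could look like with
the proto2 C++ reflection API (it only inspects top-level unknown fields,
and proto3 behaves differently, as noted elsewhere in the thread):

#include <google/protobuf/message.h>
#include <google/protobuf/unknown_field_set.h>

// Returns true if 'msg' carried fields on the wire that this build's
// schema doesn't know about; in proto2 they land in the unknown field set.
bool HasUnknownFields(const google::protobuf::Message& msg) {
  const google::protobuf::Reflection* refl = msg.GetReflection();
  return !refl->GetUnknownFields(msg).empty();
}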

On Wed, Jan 25, 2017 at 3:03 PM, Alexey Serbin <as...@cloudera.com> wrote:

> I like this idea.
>
> Probably that's too early at this point, but consider adding a notion of
> compat/non-compat feature flags to tokens.  Imagine a new version of the
> token has some additional restriction field which older code does not
> understand.  In that case, even if the token signature is correct, the
> older code should not accept the new token, since an unsupported
> non-compat feature flag is present.
>
> But in essence this looks great, IMO.
>
>
> Best regards,
>
> Alexey

Re: TokenPB contents

Posted by Alexey Serbin <as...@cloudera.com>.
I like this idea.

Probably that's too early at this point, but consider adding a notion of
compat/non-compat feature flags to tokens.  Imagine a new version of the
token has some additional restriction field which older code does not
understand.  In that case, even if the token signature is correct, the
older code should not accept the new token, since an unsupported
non-compat feature flag is present.

But in essence this looks great, IMO.


Best regards,

Alexey

On Wed, Jan 25, 2017 at 12:52 PM, Todd Lipcon <to...@cloudera.com> wrote:

> Actually had one more idea... how about:
> message AuthenticationToken {
> };
>
> message AuthorizationToken {
> };
>
> message TokenPB {
>   // The time at which this token expires, in seconds since the
>   // unix epoch.
>   optional int64 expire_unix_epoch_seconds = 1;
>
>   oneof token {
>     AuthenticationToken authn = 2;
>     AuthorizationToken authz = 3;
>   }
> };
>
> message SignedTokenPB {
>   // The actual token contents. This is a serialized TokenPB protobuf.
>   // However, we use a 'bytes' field, since protobuf doesn't guarantee
>   // that if two implementations serialize a protobuf, they'll
>   // necessarily get bytewise identical results, particularly in the
>   // presence of unknown fields.
>   optional bytes token_contents = 2;
>
>   // The cryptographic signature of 'token_contents'.
>   optional bytes signature = 3;
>
>   // The sequence number of the key which produced 'signature'.
>   optional int64 signing_key_seq_num = 4;
> };

Re: TokenPB contents

Posted by Dan Burkert <da...@apache.org>.
On Wed, Jan 25, 2017 at 12:52 PM, Todd Lipcon <to...@cloudera.com> wrote:

This looks great to me.



Re: TokenPB contents

Posted by Todd Lipcon <to...@cloudera.com>.
Actually had one more idea... how about:
message AuthenticationToken {
};

message AuthorizationToken {
};

message TokenPB {
  // The time at which this token expires, in seconds since the
  // unix epoch.
  optional int64 expire_unix_epoch_seconds = 1;

  oneof token {
    AuthenticationToken authn = 2;
    AuthorizationToken authz = 3;
  }
};

message SignedTokenPB {
  // The actual token contents. This is a serialized TokenPB protobuf.
  // However, we use a 'bytes' field, since protobuf doesn't guarantee
  // that if two implementations serialize a protobuf, they'll
  // necessarily get bytewise identical results, particularly in the
  // presence of unknown fields.
  optional bytes token_contents = 2;

  // The cryptographic signature of 'token_contents'.
  optional bytes signature = 3;

  // The sequence number of the key which produced 'signature'.
  optional int64 signing_key_seq_num = 4;
};
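
To illustrate how the two layers fit together, a rough sketch of the
signing and verification flow under this proposal (assuming the generated
classes for the messages above; Sign/CheckSignature stand in for whatever
keyed signature scheme ends up being used):

#include <cstdint>
#include <ctime>
#include <string>

// Placeholders for the actual keyed signature primitives.
std::string Sign(const std::string& data, int64_t key_seq_num);
bool CheckSignature(const std::string& data, const std::string& sig,
                    int64_t key_seq_num);

SignedTokenPB SignToken(const TokenPB& token, int64_t key_seq_num) {
  SignedTokenPB out;
  // Serialize once and sign those exact bytes; the verifier never
  // re-serializes, so protobuf's lack of byte-stable serialization
  // doesn't matter.
  out.set_token_contents(token.SerializeAsString());
  out.set_signature(Sign(out.token_contents(), key_seq_num));
  out.set_signing_key_seq_num(key_seq_num);
  return out;
}

bool VerifyToken(const SignedTokenPB& st, TokenPB* token) {
  if (!CheckSignature(st.token_contents(), st.signature(),
                      st.signing_key_seq_num())) {
    return false;
  }
  if (!token->ParseFromString(st.token_contents())) {
    return false;
  }
  // Expiration lives inside the signed bytes, so it can't be tampered with.
  return token->expire_unix_epoch_seconds() >=
         static_cast<int64_t>(std::time(nullptr));
}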



On Wed, Jan 25, 2017 at 12:44 PM, Todd Lipcon <to...@cloudera.com> wrote:

> On Wed, Jan 25, 2017 at 12:40 PM, Dan Burkert <da...@cloudera.com> wrote:
>
>> I think it must go in the 'token_contents' itself, otherwise it can be
>> modified by a malicious client.  Other than that, looks good.
>>
>>
> well, if it went into a separate field, then we'd have something like:
>
> optional bytes token_contents = 1;
> optional int64 expiration_timestamp = 2;
>
> // Signature of the string: '<32-bit big-endian length of token_contents>
> // token_contents <64-bit big-endian expiration>'
> optional bytes signature = 3;
>
> so they could try to modify it, but the signature would fail.



-- 
Todd Lipcon
Software Engineer, Cloudera

Re: TokenPB contents

Posted by Todd Lipcon <to...@cloudera.com>.
On Wed, Jan 25, 2017 at 12:40 PM, Dan Burkert <da...@cloudera.com> wrote:

> I think it must go in the 'token_contents' itself, otherwise it can be
> modified by a malicious client.  Other than that, looks good.
>
>
well, if it went into a separate field, then we'd have something like:

optional bytes token_contents = 1;
optional int64 expiration_timestamp = 2;

// Signature of the string: '<32-bit big-endian length of token_contents>
// token_contents <64-bit big-endian expiration>'
optional bytes signature = 3;

so they could try to modify it, but the signature would fail.
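
Spelled out, the sign-input could be built like this (a sketch of the byte
layout just described, not committed code):

#include <cstdint>
#include <string>

// Builds '<32-bit big-endian length of token_contents> token_contents
// <64-bit big-endian expiration>', i.e. the exact bytes the signature
// covers.
std::string BuildSignInput(const std::string& token_contents,
                           int64_t expiration_timestamp) {
  std::string buf;
  const uint32_t len = static_cast<uint32_t>(token_contents.size());
  for (int shift = 24; shift >= 0; shift -= 8) {
    buf.push_back(static_cast<char>((len >> shift) & 0xff));
  }
  buf.append(token_contents);
  const uint64_t exp = static_cast<uint64_t>(expiration_timestamp);
  for (int shift = 56; shift >= 0; shift -= 8) {
    buf.push_back(static_cast<char>((exp >> shift) & 0xff));
  }
  return buf;
}

The length prefix keeps the encoding unambiguous: no two distinct
(contents, expiration) pairs can produce the same sign-input.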





-- 
Todd Lipcon
Software Engineer, Cloudera

Re: TokenPB contents

Posted by Dan Burkert <da...@cloudera.com>.
I think it must go in the 'token_contents' itself, otherwise it can be
modified by a malicious client.  Other than that, looks good.

- Dan
