Posted to dev@ozone.apache.org by "Elek, Marton" <el...@apache.org> on 2020/02/28 13:47:22 UTC

[design] S3 access key management

Hi,

We had multiple discussions earlier about simplifying the s3_bucket -> 
ozone_volume/ozone_bucket mapping (or at least making it more opaque).

I wrote a formal proposal about a very lightweight change:

https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view

Please let me know what you think...


Thanks a lot,
Marton

---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-dev-help@hadoop.apache.org


Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
 >   How do I comment on the document? Do I need Hackmd account?


You can sign in with a Github, Facebook, Twitter, Dropbox or Google account.

But I modified the permissions to make it possible to comment on it 
without logging in.

Or you can simply add your comment to this thread (without the location 
context, I know...)

Marton


On 2/28/20 8:42 PM, Jitendra Pandey wrote:
> Thanks for starting this thread Marton. We need to address this for better
> usability.
>   How do I comment on the document? Do I need Hackmd account?
> 
> On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:
> 
>>
>> Hi,
>>
>> We had multiple discussions earlier about simplifying s3_bucket ->
>> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
>>
>> I wrote a formal proposal about a very lightweight change:
>>
>> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
>>
>> Please let me know what do you think...
>>
>>
>> Thanks a lot,
>> Marton
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>>
>>
> 



Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
Thanks for all the feedback.

The final version is uploaded as a pull request (it will be part of the 
documentation if merged):

https://github.com/apache/hadoop-ozone/pull/756/files?short_path=44ad58e#diff-44ad58ec3778726a6b5a60e01642e088


It will be merged if there are no more concerns.


Latest addition:

  * There was a question about locking during the community sync. It 
doesn't seem to be a big problem, as for the bind-mounted volumes the 
lock of the referenced volume will be held most of the time.
  * Also, it's a read lock, and the write lock is required only for 
quota / owner changes, which are infrequent (IMHO).



Marton



On 3/20/20 2:42 PM, Elek, Marton wrote:
> 
> Based on feedback and comments I updated the document.
> 
> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg
> 
> The current proposal is the following:
> 
> 1. Use one (configured) volume for all the s3 buckets.
> 
> For example if this configured volume is /s3, you can see all the Ozone 
> buckets in this volume via the S3 interface
> 
> (s3 bucket "bucket1" will be mapped to Ozone path "/s3/bucket1"
> 
> 2. To make it possible to access ANY volume/buckets, any Ozone volume 
> can be "bind mounted" to other volumes.
> 
> For example:
> 
> ozone sh mount /vol1/bucket1 /s3/bucket1
> 
> will create a symbolic-link like bind mounting, and inside /s3/bucket1 
> the content of /vol1/bucket1 will be shown. Together with the 1st point 
> (any buckets under s3 is exposed) this make it possible to expose any 
> buckets.
> 
> !!! INCOMPATIBLE CHANGE ALERT !!!!!
> 
> When 1 will be implemented, but 2, not yet. For a limited time of period 
> we will share buckets only from one volume as s3 buckets. This is 
> different from the current implementation when you can use s3buckets 
> from multiple volumes.
> 
> If it's a blocker for you, please share your opinion and we can schedule 
> the implementation according to the feedback.
> 
> 
> Thanks all the feedback and comments,
> Marton
> 
> 
> 
> 
> On 3/17/20 9:43 AM, Elek, Marton wrote:
>>
>>
>> On 3/16/20 3:11 PM, Arpit Agarwal wrote:
>>> Thanks for writing this up Marton. I updated the doc to add a fourth 
>>> problem:
>>>
>>>       > Ozone buckets created via the native object store interface 
>>> are not visible via the S3 gateway.
>>
>>> I don’t understand option 1. Does it mean that we will have at least 
>>> one volume per user? 
>>
>> No, you can use the same value:
>>
>> kinit user1 -kt ....
>> ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
>> s3 create-bucket ....
>>
>> kinit user2 -kt ....
>> ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
>> s3 create-bucket ....
>>
>>
>>> Also the access key is separate per user - so how do I grant another 
>>> user access to my volumes?
>>
>> See the previous example. If you have permission to the volume you can 
>> create an ACCESS_KEY_ID to get an s3 view of the volume.
>>
>>>
>>> I like option 2. The notion of volumes already doesn’t work in the S3 
>>> world. We also need to fix enumeration of volumes by users, this is 
>>> not an S3 issue.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>>
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> 



Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
Based on feedback and comments I updated the document.

https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg

The current proposal is the following:

1. Use one (configured) volume for all the s3 buckets.

For example, if this configured volume is /s3, you can see all the Ozone 
buckets in this volume via the S3 interface

(the s3 bucket "bucket1" will be mapped to the Ozone path "/s3/bucket1").

2. To make it possible to access ANY volume/bucket, any Ozone bucket 
can be "bind mounted" into another volume.

For example:

ozone sh mount /vol1/bucket1 /s3/bucket1

will create a symbolic-link-like bind mount, and inside /s3/bucket1 
the content of /vol1/bucket1 will be shown. Together with the 1st point 
(any bucket under /s3 is exposed) this makes it possible to expose any 
bucket.

!!! INCOMPATIBLE CHANGE ALERT !!!!!

While 1 is implemented but 2 is not yet, for a limited period of time 
we will expose buckets from only one volume as s3 buckets. This is 
different from the current implementation, where you can use s3 buckets 
from multiple volumes.

If it's a blocker for you, please share your opinion and we can schedule 
the implementation according to the feedback.


Thanks for all the feedback and comments,
Marton




On 3/17/20 9:43 AM, Elek, Marton wrote:
> 
> 
> On 3/16/20 3:11 PM, Arpit Agarwal wrote:
>> Thanks for writing this up Marton. I updated the doc to add a fourth 
>> problem:
>>
>>       > Ozone buckets created via the native object store interface 
>> are not visible via the S3 gateway.
> 
>> I don’t understand option 1. Does it mean that we will have at least 
>> one volume per user? 
> 
> No, you can use the same value:
> 
> kinit user1 -kt ....
> ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
> s3 create-bucket ....
> 
> kinit user2 -kt ....
> ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
> s3 create-bucket ....
> 
> 
>> Also the access key is separate per user - so how do I grant another 
>> user access to my volumes?
> 
> See the previous example. If you have permission to the volume you can 
> create an ACCESS_KEY_ID to get an s3 view of the volume.
> 
>>
>> I like option 2. The notion of volumes already doesn’t work in the S3 
>> world. We also need to fix enumeration of volumes by users, this is 
>> not an S3 issue.
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> 



Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.

On 3/16/20 3:11 PM, Arpit Agarwal wrote:
> Thanks for writing this up Marton. I updated the doc to add a fourth problem:
> 
>   	> Ozone buckets created via the native object store interface are not visible via the S3 gateway.

> I don’t understand option 1. Does it mean that we will have at least one volume per user? 

No, you can use the same volume:

kinit user1 -kt ....
ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
s3 create-bucket ....

kinit user2 -kt ....
ACCESS_KEY_ID=$(ozone s3 create-secret --volume=vol1)
s3 create-bucket ....


> Also the access key is separate per user - so how do I grant another user access to my volumes?

See the previous example. If you have permission on the volume, you can 
create an ACCESS_KEY_ID to get an s3 view of the volume.
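
A toy sketch of this point (the table and all names are purely 
illustrative, not Ozone internals): any number of access keys can be 
bound to the same volume, so both users get the same s3 view of it.

```python
# Illustrative sketch only: several access keys can map to the same
# volume, so two users who both have permission on vol1 share its buckets.

SECRET_TABLE = {
    # access_key_id -> volume chosen at `ozone s3 create-secret --volume=...`
    "access-key-of-user1": "vol1",
    "access-key-of-user2": "vol1",
}

def volume_for(access_key_id):
    return SECRET_TABLE[access_key_id]

# Both keys resolve to the same volume:
assert volume_for("access-key-of-user1") == volume_for("access-key-of-user2")
```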

> 
> I like option 2. The notion of volumes already doesn’t work in the S3 world. We also need to fix enumeration of volumes by users, this is not an S3 issue.



Re: [design] S3 access key management

Posted by Arpit Agarwal <aa...@cloudera.com.INVALID>.
Thanks for writing this up Marton. I updated the doc to add a fourth problem:

 	> Ozone buckets created via the native object store interface are not visible via the S3 gateway.

I don’t understand option 1. Does it mean that we will have at least one volume per user? Also the access key is separate per user - so how do I grant another user access to my volumes? 

I like option 2. The notion of volumes already doesn’t work in the S3 world. We also need to fix enumeration of volumes by users, this is not an S3 issue.

Thanks,
Arpit


> On Mar 16, 2020, at 1:32 AM, Elek, Marton <el...@apache.org> wrote:
> 
> 
> There are 4 options remains:
> 
> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?both
> 
> I think 2 (use the same volume for all the s3 buckets) and 3 (use volume-bucket format for s3 bucket names) are both very limited. 3 means that we remove the support of volumes for the whole s3 world, half part of the Ozone.
> 
> The remaining ones: 1 and 4
> 
> 4: remove the volume from the path of o3fs/ofs and use the same url for both S3 and Hadoop File system. But keep the volume functionality (use it as an administrative group).
> 
> I know it makes harder to disjoint namespaces but it 1). I don't know what is the exact plan about that one 2). It seems to be possible with caching volume -> bucket information on the clinet side
> 
> 1: restrict the view of the buckets to one volume per AWS_SECRET_ACCESS_KEY. Slightly more clear, but still has some confusion as you might see two different content with the same bucket name but different secret.
> 
> I prefer the solution 4 and 1 is my second preference.
> 
> Is there any more arguments / concerns about the proposed solution (against 4 or 1)
> 
> Thanks,
> Marton
> 
> 
> 
> 
> 
> On 3/3/20 6:08 PM, Elek, Marton wrote:
>>> This is not true, as we use only use access key id during the creation of
>>> s3 bucket to generate volume name from accessKeyID.
>>> For other requests like create/list/read key, the flow is
>>> 1. GetVolumeName(Bucket)
>>> 2.GetVolume(GetVolumeName(Bucket))
>>> 3.GetBucket
>> Got it, thanks to explain it. It's more confusing than I thought :-( Even if a volume is defined for an ACCESS_KEY_ID, it's not guaranteed to be the volume for a specific bucket (as it's used only to __create__ the buckets).
>> What do you think about defining the volume name for the ACCESS_KEY_ID as the current namespace/context. Remove the mapping table at all and always use the actual volume during **any* operation? (similar to the kubernetes current context...)
>> I rewrote the document and added 4 options (unfortunately all of them are painful...), including this one.
>> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?both
>> Can you please add if you have more (and add pro/con to any of them if you see...)
>> Thanks a lot,
>> Marton
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> 




Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
There are 4 remaining options:

https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?both

I think 2 (use the same volume for all the s3 buckets) and 3 (use 
volume-bucket format for s3 bucket names) are both very limited. 3 means 
that we remove the support of volumes for the whole s3 world, half of 
Ozone.

The remaining ones: 1 and 4

4: remove the volume from the path of o3fs/ofs and use the same url for 
both S3 and the Hadoop file system, but keep the volume functionality 
(use it as an administrative group).

I know it makes it harder to have disjoint namespaces, but 1) I don't 
know what the exact plan is for that one, and 2) it seems to be possible 
with caching volume -> bucket information on the client side.

1: restrict the view of the buckets to one volume per 
AWS_SECRET_ACCESS_KEY. Slightly clearer, but still has some confusion, 
as you might see two different contents with the same bucket name but a 
different secret.
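
The confusion can be sketched like this (all names are hypothetical 
illustrations):

```python
# Illustrative sketch of option 1: each secret is scoped to one volume,
# so the same bucket name can resolve to two different buckets depending
# on which secret is used.

KEY_TO_VOLUME = {"secret-a": "vol1", "secret-b": "vol2"}
VOLUMES = {
    "vol1": {"reports": ["2020-q1.csv"]},
    "vol2": {"reports": ["draft.txt"]},
}

def list_bucket(secret, bucket):
    return VOLUMES[KEY_TO_VOLUME[secret]][bucket]

# Same bucket name, different content:
assert list_bucket("secret-a", "reports") != list_bucket("secret-b", "reports")
```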

I prefer solution 4, and 1 is my second preference.

Are there any more arguments / concerns about the proposed solutions 
(against 4 or 1)?

Thanks,
Marton





On 3/3/20 6:08 PM, Elek, Marton wrote:
> 
>> This is not true, as we use only use access key id during the creation of
>> s3 bucket to generate volume name from accessKeyID.
>> For other requests like create/list/read key, the flow is
>> 1. GetVolumeName(Bucket)
>> 2.GetVolume(GetVolumeName(Bucket))
>> 3.GetBucket
> 
> 
> Got it, thanks to explain it. It's more confusing than I thought :-( 
> Even if a volume is defined for an ACCESS_KEY_ID, it's not guaranteed to 
> be the volume for a specific bucket (as it's used only to __create__ the 
> buckets).
> 
> What do you think about defining the volume name for the ACCESS_KEY_ID 
> as the current namespace/context. Remove the mapping table at all and 
> always use the actual volume during **any* operation? (similar to the 
> kubernetes current context...)
> 
> I rewrote the document and added 4 options (unfortunately all of them 
> are painful...), including this one.
> 
> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?both
> 
> Can you please add if you have more (and add pro/con to any of them if 
> you see...)
> 
> Thanks a lot,
> Marton
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> 



Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
> This is not true, as we use only use access key id during the creation of
> s3 bucket to generate volume name from accessKeyID.
> For other requests like create/list/read key, the flow is
> 1. GetVolumeName(Bucket)
> 2.GetVolume(GetVolumeName(Bucket))
> 3.GetBucket


Got it, thanks for explaining it. It's more confusing than I thought :-( 
Even if a volume is defined for an ACCESS_KEY_ID, it's not guaranteed to 
be the volume for a specific bucket (as it's used only to __create__ the 
buckets).

What do you think about defining the volume name for the ACCESS_KEY_ID 
as the current namespace/context? Remove the mapping table altogether and 
always use the actual volume during **any** operation? (similar to the 
kubernetes current context...)

I rewrote the document and added 4 options (unfortunately all of them 
are painful...), including this one.

https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?both

Can you please add more options if you have any (and add pros/cons to 
any of them if you see some...)

Thanks a lot,
Marton



Re: [design] S3 access key management

Posted by Bharat Viswanadham <bv...@cloudera.com.INVALID>.
>
> I would prefer to use a map instead of the hard coded rule between the
> access_key_id -> volume. If you use any user based transformation rules
> (with or without the md5hex) you can't share the same s3 bucket between
> two users.
> Let's say you have one user who writes to an s3 bucket and an other user
> reads it. If the volume name is generated from the user name they
> couldn't use the same bucket.


This is not true, as we only use the access key id during the creation of
the s3 bucket, to generate the volume name from the accessKeyID.
For other requests like create/list/read key, the flow is:
1. GetVolumeName(Bucket)
2. GetVolume(GetVolumeName(Bucket))
3. GetBucket

I have also tried changing the s3 credentials and was able to write to
a bucket and do a list operation. (Tested this on a non-secure cluster.)
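
The flow above can be sketched as follows; the tables and function names 
are illustrative, not the real Ozone Manager API:

```python
# Sketch of the lookup flow described above.  The volume is recorded per
# s3 bucket at creation time, so later reads do not depend on the
# caller's access key.

S3_BUCKET_TO_VOLUME = {}                         # s3 bucket -> volume name
VOLUMES = {"vol-of-creator": {"bucket1": ["key1.txt"]}}

def create_s3_bucket(bucket, access_key_volume):
    # bucket creation is the only step using the access-key-derived volume
    S3_BUCKET_TO_VOLUME[bucket] = access_key_volume

def list_keys(bucket):
    volume_name = S3_BUCKET_TO_VOLUME[bucket]    # 1. GetVolumeName(Bucket)
    volume = VOLUMES[volume_name]                # 2. GetVolume(...)
    return volume[bucket]                        # 3. GetBucket, read keys

create_s3_bucket("bucket1", "vol-of-creator")
# Any later credential that resolves the bucket reaches the same volume:
assert list_keys("bucket1") == ["key1.txt"]
```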



Thanks,
Bharat



On Mon, Mar 2, 2020 at 12:44 AM Elek, Marton <el...@apache.org> wrote:

>
> Thank you the feedback, Bharat.
>
>  > One approach I can think of is, we can move the volume generation also
> to
>  > secret generation and print the volume name for that user along with
>  > accessKey and access Secret, in this way volume name will be known to
> the
>  > user during secret generation. (but only issue is this command needs
> to be
>  > supported in non-secure also) This can be done with very less code
> changes.
>
> Yes, I agree. We can print out the used volume during the secret creation.
>
>
> But just adding this printing doesn't solve the confusion IMHO (See my
> arguments below).
>
>
>  > For volume name generation, instead of (s3+md5hex(lowecAccesskey), we
> can
>  > replace all special characters not allowed in volume with "-" which
> is more
>  > readable than the current approach.
>  > Please let me know your thoughts.
>
>
> I would prefer to use a map instead of the hard coded rule between the
> access_key_id -> volume. If you use any user based transformation rules
> (with or without the md5hex) you can't share the same s3 bucket between
> two users.
>
> Let's say you have one user who writes to an s3 bucket and an other user
> reads it. If the volume name is generated from the user name they
> couldn't use the same bucket.
>
> My only improvement in this proposal is to define this mapping manually
> (which volume should be used?) for each the access_key_id.
>
> (And for technical reasons, it seems to be easier to do this with
> supporting multiple access_key_id to the same user, which is already
> supported by AWS).
>
> Marton
>
>
>
>
>
> On 3/2/20 8:20 AM, Bharat Viswanadham wrote:
> > Thank You, Marton, for starting this discussion and detailed proposal.
> > I have few comments, commented on the document and also have another
> > proposal that can be done with minimal code changes and also can avoid
> the
> > confusion of knowing what is the volume name needs to be used during
> O3fs.
> >
> > One approach I can think of is, we can move the volume generation also to
> > secret generation and print the volume name for that user along with
> > accessKey and access Secret, in this way volume name will be known to the
> > user during secret generation. (but only issue is this command needs to
> be
> > supported in non-secure also) This can be done with very less code
> changes.
> > (Proposal also needs this command to be supported in the secure cluster
> >
> > For volume name generation, instead of (s3+md5hex(lowecAccesskey), we can
> > replace all special characters not allowed in volume with "-" which is
> more
> > readable than the current approach.
> > Please let me know your thoughts.
> >
> > Thanks,
> > Bharat
> >
> >
> >
> > On Fri, Feb 28, 2020 at 11:43 AM Arpit Agarwal
> > <aa...@cloudera.com.invalid> wrote:
> >
> >> You can sign in with Cloudera Google account and comment.
> >>
> >>
> >>> On Feb 28, 2020, at 11:42 AM, Jitendra Pandey
> >> <ji...@cloudera.com.INVALID> wrote:
> >>>
> >>> Thanks for starting this thread Marton. We need to address this for
> >> better
> >>> usability.
> >>> How do I comment on the document? Do I need Hackmd account?
> >>>
> >>> On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:
> >>>
> >>>>
> >>>> Hi,
> >>>>
> >>>> We had multiple discussions earlier about simplifying s3_bucket ->
> >>>> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
> >>>>
> >>>> I wrote a formal proposal about a very lightweight change:
> >>>>
> >>>> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
> >>>>
> >>>> Please let me know what do you think...
> >>>>
> >>>>
> >>>> Thanks a lot,
> >>>> Marton
> >>>>
> >>>> ---------------------------------------------------------------------
> >>>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> >>>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> >>>>
> >>>>
> >>
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> >> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> >>
> >>
> >
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>
>

Re: [design] S3 access key management

Posted by "Elek, Marton" <el...@apache.org>.
Thank you for the feedback, Bharat.

 > One approach I can think of is, we can move the volume generation also to
 > secret generation and print the volume name for that user along with
 > accessKey and access Secret, in this way volume name will be known to the
 > user during secret generation. (but only issue is this command needs 
to be
 > supported in non-secure also) This can be done with very less code 
changes.

Yes, I agree. We can print out the used volume during the secret creation.


But just adding this printing doesn't solve the confusion IMHO (See my 
arguments below).


 > For volume name generation, instead of (s3+md5hex(lowecAccesskey), we can
 > replace all special characters not allowed in volume with "-" which 
is more
 > readable than the current approach.
 > Please let me know your thoughts.


I would prefer to use a map instead of a hard-coded rule between the 
access_key_id -> volume. If you use any user-based transformation rule 
(with or without the md5hex) you can't share the same s3 bucket between 
two users.

Let's say you have one user who writes to an s3 bucket and another user 
reads it. If the volume name is generated from the user name they 
couldn't use the same bucket.

My only improvement in this proposal is to define this mapping manually 
(which volume should be used?) for each access_key_id.
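
The difference can be sketched like this (the derivation rule below is 
only a guess at the current scheme, and all names are hypothetical):

```python
import hashlib

# With a volume name derived from the user's key, two users land in
# different volumes, so they cannot share a bucket; with an explicit
# map, the admin can point both access keys at the same volume.

def derived_volume(access_key_id):
    # a user-based transformation rule, e.g. s3 + md5hex(lowercased key)
    return "s3" + hashlib.md5(access_key_id.lower().encode()).hexdigest()

# Writer and reader end up in different namespaces:
assert derived_volume("writer-key") != derived_volume("reader-key")

# With an explicit mapping, both keys can see the same volume:
ACCESS_KEY_TO_VOLUME = {"writer-key": "vol1", "reader-key": "vol1"}
assert ACCESS_KEY_TO_VOLUME["writer-key"] == ACCESS_KEY_TO_VOLUME["reader-key"]
```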

(And for technical reasons, it seems to be easier to do this by 
supporting multiple access_key_ids for the same user, which is already 
supported by AWS.)

Marton





On 3/2/20 8:20 AM, Bharat Viswanadham wrote:
> Thank You, Marton, for starting this discussion and detailed proposal.
> I have few comments, commented on the document and also have another
> proposal that can be done with minimal code changes and also can avoid the
> confusion of knowing what is the volume name needs to be used during O3fs.
> 
> One approach I can think of is, we can move the volume generation also to
> secret generation and print the volume name for that user along with
> accessKey and access Secret, in this way volume name will be known to the
> user during secret generation. (but only issue is this command needs to be
> supported in non-secure also) This can be done with very less code changes.
> (Proposal also needs this command to be supported in the secure cluster
> 
> For volume name generation, instead of (s3+md5hex(lowecAccesskey), we can
> replace all special characters not allowed in volume with "-" which is more
> readable than the current approach.
> Please let me know your thoughts.
> 
> Thanks,
> Bharat
> 
> 
> 
> On Fri, Feb 28, 2020 at 11:43 AM Arpit Agarwal
> <aa...@cloudera.com.invalid> wrote:
> 
>> You can sign in with Cloudera Google account and comment.
>>
>>
>>> On Feb 28, 2020, at 11:42 AM, Jitendra Pandey
>> <ji...@cloudera.com.INVALID> wrote:
>>>
>>> Thanks for starting this thread Marton. We need to address this for
>> better
>>> usability.
>>> How do I comment on the document? Do I need Hackmd account?
>>>
>>> On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:
>>>
>>>>
>>>> Hi,
>>>>
>>>> We had multiple discussions earlier about simplifying s3_bucket ->
>>>> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
>>>>
>>>> I wrote a formal proposal about a very lightweight change:
>>>>
>>>> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
>>>>
>>>> Please let me know what do you think...
>>>>
>>>>
>>>> Thanks a lot,
>>>> Marton
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>>>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>>>>
>>>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>>
>>
> 



Re: [design] S3 access key management

Posted by Bharat Viswanadham <bv...@cloudera.com.INVALID>.
Thank You, Marton, for starting this discussion and the detailed proposal.
I have a few comments (commented on the document) and also another
proposal that can be done with minimal code changes and can avoid the
confusion of knowing which volume name needs to be used with o3fs.

One approach I can think of is: we can move the volume generation to
secret generation and print the volume name for that user along with the
accessKey and access Secret; this way the volume name will be known to
the user during secret generation. (The only issue is that this command
needs to be supported in non-secure mode as well.) This can be done with
very few code changes. (The proposal also needs this command to be
supported in the secure cluster.)

For volume name generation, instead of s3 + md5hex(lowercaseAccessKey),
we can replace all special characters not allowed in volume names with
"-", which is more readable than the current approach.
Please let me know your thoughts.
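
The two generation schemes can be compared with a short sketch; the 
exact allowed-character rules in Ozone may differ, so the regex below is 
an assumption for illustration only:

```python
import hashlib
import re

def volume_md5(access_key_id):
    # current style: s3 + md5hex(lowercased access key)
    return "s3" + hashlib.md5(access_key_id.lower().encode()).hexdigest()

def volume_sanitized(access_key_id):
    # proposed style: replace characters not allowed in volume names
    # (assumed here to be anything outside [a-z0-9-]) with "-"
    return "s3-" + re.sub(r"[^a-z0-9-]", "-", access_key_id.lower())

key = "testuser/scm@EXAMPLE.COM"
print(volume_md5(key))        # opaque 32-character hash, hard to recognize
print(volume_sanitized(key))  # -> "s3-testuser-scm-example-com"
```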

Thanks,
Bharat



On Fri, Feb 28, 2020 at 11:43 AM Arpit Agarwal
<aa...@cloudera.com.invalid> wrote:

> You can sign in with Cloudera Google account and comment.
>
>
> > On Feb 28, 2020, at 11:42 AM, Jitendra Pandey
> <ji...@cloudera.com.INVALID> wrote:
> >
> > Thanks for starting this thread Marton. We need to address this for
> better
> > usability.
> > How do I comment on the document? Do I need Hackmd account?
> >
> > On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:
> >
> >>
> >> Hi,
> >>
> >> We had multiple discussions earlier about simplifying s3_bucket ->
> >> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
> >>
> >> I wrote a formal proposal about a very lightweight change:
> >>
> >> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
> >>
> >> Please let me know what do you think...
> >>
> >>
> >> Thanks a lot,
> >> Marton
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> >> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
> >>
> >>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>
>

Re: [design] S3 access key management

Posted by Arpit Agarwal <aa...@cloudera.com.INVALID>.
You can sign in with a Cloudera Google account and comment.


> On Feb 28, 2020, at 11:42 AM, Jitendra Pandey <ji...@cloudera.com.INVALID> wrote:
> 
> Thanks for starting this thread Marton. We need to address this for better
> usability.
> How do I comment on the document? Do I need Hackmd account?
> 
> On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:
> 
>> 
>> Hi,
>> 
>> We had multiple discussions earlier about simplifying s3_bucket ->
>> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
>> 
>> I wrote a formal proposal about a very lightweight change:
>> 
>> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
>> 
>> Please let me know what do you think...
>> 
>> 
>> Thanks a lot,
>> Marton
>> 
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
>> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>> 
>> 




Re: [design] S3 access key management

Posted by Jitendra Pandey <ji...@cloudera.com.INVALID>.
Thanks for starting this thread Marton. We need to address this for better
usability.
 How do I comment on the document? Do I need a Hackmd account?

On Fri, Feb 28, 2020 at 5:47 AM Elek, Marton <el...@apache.org> wrote:

>
> Hi,
>
> We had multiple discussions earlier about simplifying s3_bucket ->
> ozone_volume/ozone_bucket mapping (or at least make it more opaque)
>
> I wrote a formal proposal about a very lightweight change:
>
> https://hackmd.io/uqSYkmd8SXGAAjQx3sObQg?view
>
> Please let me know what do you think...
>
>
> Thanks a lot,
> Marton
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: ozone-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: ozone-dev-help@hadoop.apache.org
>
>