Posted to dev@kafka.apache.org by Proven Provenzano <pp...@confluent.io.INVALID> on 2023/02/01 22:24:27 UTC

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Hi Colin,

I agree that the --alter flag is not useful. I just wanted to gauge how
close to the original command line we want to stay.

I also think that positional args are not the way to go. Parsing can be
smarter than that.

Using --add-config as the lead argument for setting a dynamic config is
fine. I think disambiguating primary arguments from the arguments of a
specific config will be simple to understand. The '--' is for primary
arguments to the kafka-storage command. Multiple subarguments are possible
for a single --add-config, each of which does not have a '--'. So, like the
following:

--add-config
'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
entity-type users entity-name alice

The following are also acceptable from your example. I changed the ordering
because it doesn't matter:

--add-config entity-type brokers entity-name 0 foo=bar
--add-config default-entity entity-type broker baaz=quux

The --add-config sub-argument parsing knows about the possible sub-fields
entity-type, entity-name, and default-entity; the remaining key=value pair
is the config to apply to that entity.
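
To make that concrete, here is a minimal sketch of the kind of sub-argument
scan I have in mind (Java, with hypothetical class and helper names; not the
final implementation):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical holder for one parsed --add-config group.
class AddConfigGroup {
    String entityType;          // e.g. "users" or "brokers"
    String entityName;          // null if default-entity was given
    boolean defaultEntity;
    final Map<String, String> configs = new HashMap<>();
}

class AddConfigScan {
    // 'tokens' are the space-separated words after --add-config,
    // up to (but not including) the next token starting with "--".
    static AddConfigGroup scan(List<String> tokens) {
        AddConfigGroup group = new AddConfigGroup();
        for (int i = 0; i < tokens.size(); i++) {
            String t = tokens.get(i);
            if (t.equals("entity-type")) {
                group.entityType = tokens.get(++i);
            } else if (t.equals("entity-name")) {
                group.entityName = tokens.get(++i);
            } else if (t.equals("default-entity")) {
                group.defaultEntity = true;
            } else if (t.contains("=")) {
                int eq = t.indexOf('=');  // value may be a bracketed list
                group.configs.put(t.substring(0, eq), t.substring(eq + 1));
            } else {
                throw new IllegalArgumentException("Unknown sub-argument: " + t);
            }
        }
        return group;
    }
}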

Thanks
--Proven

On Fri, Jan 27, 2023 at 6:20 PM Colin McCabe <cm...@apache.org> wrote:

> On Fri, Jan 27, 2023, at 11:25, Proven Provenzano wrote:
> > Hi Colin,
> >
> > So I think we are converging: we put the adding of records in
> > kafka-storage, allow for salt and iterations to be optional, and now we
> > are getting to how to parse the individual arguments.
> >
> > My previous approach was to keep '--' as a delimiter for primary
> > kafka-storage arguments but to allow multiple space-separated sub-args,
> > so each '--scram-config' can contain multiple space-separated key=value
> > pairs:
> >     --scram-config user=alice 'SCRAM-SHA-256=[iterations=8192,password=alicepass]'
> > It could also be like the following, with sub-parsing of arguments where
> > name requires an additional arg and config requires one too. This looks
> > more like kafka-configs:
> >    --scram-config entity-name alice config 'SCRAM-SHA-256=[iterations=8192,password=alicepass]'
> > If we really want to keep a similar parsing to kafka-configs we could
> > use '--alter'
> > as a primary argument like --config and --cluster-id, and the other args
> > are secondary arguments to --alter. The following now looks almost exactly
> > like kafka-configs. We would just ignore --entity-type as it must be of
> > type user.
> >   --alter --add-config
> > 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]'
> > --entity-type users --entity-name alice
> >
> > Thoughts?
> > --Proven
> >
>
> Hi Proven,
>
> I don't think it makes sense to have an --alter flag because we are not
> altering anything. We are creating a new bootstrap file.
>
> In general, we need to think about what else we will add to the formatting
> tool in the future. One thing I can think of is we will probably need to
> add the ability to set dynamic configurations. So we probably will have
> something like "--add-config key=value". Then, of course, the question
> becomes how to supply the entity-type and entity-name. We cannot really do
> it exactly like kafka-configs.sh does it, because that tool only accepts
> one alteration at a time, whereas we may have to do many.
>
> If we are willing to enforce ordering, we could do something like this:
> --entity-type brokers --entity-name 0 --add-config foo=bar
> --entity-type brokers --default-entity --add-config baaz=quux
>
> If we go that route, then maybe it makes sense to do SCRAM with a similar
> paradigm. So something like:
> --entity-type users --entity-name alice --add-config
> 'SCRAM-SHA-512=[password=alice-secret]'
>
> I do think this is doable with argparse4j. What do you think?
>
> best,
> Colin
>
>
> >
> >
> > On Tue, Jan 24, 2023 at 12:22 AM Colin McCabe <cm...@apache.org> wrote:
> >
> >> On Fri, Jan 20, 2023, at 13:02, Proven Provenzano wrote:
> >> > Hi Colin,
> >> > Thanks for the response.
> >> >
> >> > I chose raw records, thinking it might be useful for future additions
> >> > of records that customers might want to add before the first start of
> >> > the cluster. I do see that it is at best an engineer-friendly interface.
> >> >
> >> > I do think kafka-storage is the correct place to put the logic for
> >> > adding records to the bootstrap.checkpoint file. I think keeping the
> >> > logic for managing the bootstrap separate from the logic of configuring
> >> > an existing cluster that is already running is a good division of
> >> > functionality, and I think this separation will reduce the parsing
> >> > logic significantly.
> >>
> >> Hi Proven,
> >>
> >> I suspect having the separation will increase the parsing logic you
> >> have to write rather than reducing it, since it will be more difficult
> >> to reuse the logic that is already in kafka-configs.sh.
> >>
> >> I do agree that having multiple commands modifying the bootstrap may be
> >> confusing, though. So maybe it's best to keep it all in the formatting
> >> tool, as you say.
> >>
> >> >
> >> > The API suggestion you made for kafka-storage is okay. I would prefer
> >> > to have one option for an entire SCRAM config including the user, such
> >> > as the following:
> >> >
> >>
> >> Hmm... I don't think having a single giant string for all the SCRAM
> >> configurations makes sense. At minimum each SCRAM user should get its
> >> own string. Another reason for doing it this way is that this is how
> >> kafka-configs.sh works, and we don't want two different incompatible
> >> ways of specifying SCRAM users on the command line.
> >>
> >> > I think adding the Argparse4j support for reading the arguments from
> >> > a file is a must.
> >>
> >> Yeah, agreed.
> >>
> >> best,
> >> Colin
> >>
> >> >
> >> > --Proven
> >> >
> >> >
> >> > On Thu, Jan 19, 2023 at 7:07 PM Colin McCabe <cm...@apache.org> wrote:
> >> >
> >> >> Hi Proven,
> >> >>
> >> >> Thanks for putting this together.
> >> >>
> >> >> We always intended to have a way to bootstrap into using an all-SCRAM
> >> >> cluster, from scratch.
> >> >>
> >> >> I have two big comments here. First, I think we need a better
> >> >> interface than raw records. And second, I'm not sure that
> >> >> kafka-storage.sh is the right place to put this.
> >> >>
> >> >> I think raw records are going to be tough for people to use, because
> >> >> there are a lot of fields, and the values to set them to are not
> >> >> intuitive. For example, to set SHA512, the user needs to set
> >> >> "mechanism" equal to 2. That is going to be impossible to remember or
> >> >> figure out without looking at the source code. The other thing of
> >> >> course is that we may add more fields over time, including mandatory
> >> >> ones. So any documentation could quickly get out of date.
> >> >>
> >> >> I think people are going to want to specify SCRAM users here the
> >> >> same way they do when using the kafka-configs.sh tool. As a reminder,
> >> >> using kafka-configs.sh, they specify users like this:
> >> >>
> >> >> ./bin/kafka-configs --bootstrap-server localhost:9092 --alter \
> >> >>   --add-config 'SCRAM-SHA-256=[iterations=8192,password=pass]' \
> >> >>   --entity-type users \
> >> >>   --entity-name alice
> >> >>
> >> >> Of course, in this example, we're not specifying a salt. So we'd
> >> >> have to evaluate whether that's what we want for our use-case as
> >> >> well. On the plus side, specifying a salt could ensure that the
> >> >> bootstrap files end up identical on every node. On the minus side,
> >> >> it is another random number that users would need to generate and
> >> >> explicitly pass in.
> >> >>
> >> >> I would lean towards auto-generating the salt. I don't think the
> >> >> salt needs to be the same on all nodes. Only one controller will
> >> >> become active and write the bootstrap records to the log; no other
> >> >> controllers will do that. Brokers don't need to read the SCRAM
> >> >> records out of the bootstrap file.
> >> >>
> >> >> If we put all the functionality into kafka-storage.sh, it might look
> >> >> something like this:
> >> >>
> >> >> ./bin/kafka-storage.sh format \
> >> >>   --config [my-config-path] \
> >> >>   --cluster-id mb0Zz1YPTUeVzpedHHPT-Q \
> >> >>   --release-version 3.5-IV0 \
> >> >>   --scram-user alice \
> >> >>   --scram-config 'SCRAM-SHA-256=[iterations=8192,password=alicepass]' \
> >> >>   --scram-user bob \
> >> >>   --scram-config 'SCRAM-SHA-256=[password=bobpass]'
> >> >>
> >> >> (Here I am assuming that each --scram-user must be followed by
> >> >> exactly one --scram-config line.)
> >> >>
> >> >> Perhaps it's worth considering whether it would be better to add a
> >> >> mode to kafka-configs.sh where it appends to a bootstrap file.
> >> >>
> >> >> If we do put everything into kafka-storage.sh, we should consider
> >> >> the plight of people with low limits on the maximum length of their
> >> >> command lines. One fix for these people could be allowing them to
> >> >> read their arguments from a file like this:
> >> >>
> >> >> $ ./bin/kafka-storage.sh @myfile
> >> >> $ cat myfile:
> >> >>   ./bin/kafka-storage.sh format \
> >> >>     --config [my-config-path] \
> >> >>   ...
> >> >> [etc, etc.]
> >> >>
> >> >> Argparse4j supports this natively with fromFilePrefix. See
> >> >> https://argparse4j.github.io/usage.html#fromfileprefix
> >> >>
> >> >> best,
> >> >> Colin
> >> >>
> >> >>
> >> >> On Thu, Jan 19, 2023, at 11:08, Proven Provenzano wrote:
> >> >> > I have written a KIP describing the API additions needed to
> >> >> > kafka-storage to store SCRAM credentials at bootstrap time. Please
> >> >> > take a look at
> >> >> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-900%3A+KRaft+kafka-storage.sh+API+additions+to+support+SCRAM+for+Kafka+Brokers
> >> >> >
> >> >> > --
> >> >> > --Proven
> >> >>
> >>
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by José Armando García Sancio <js...@confluent.io.INVALID>.
Comments below.

On Wed, Feb 1, 2023 at 2:24 PM Proven Provenzano
<pp...@confluent.io.invalid> wrote:
> The following are also acceptable from your example. I changed the ordering
> because it doesn't matter.
>
> --add-config entity-type brokers entity-name 0 foo=bar
> --add-config default-entity entity-type broker baaz=quux
>
> The --add-config sub argument parsing knows about the possible sub fields
> entity-type, entity-name, default-entity and the key=value indicates what
> the config is applied to.

Are you saying that --add-config will have 3 or 4 arguments? The 3-argument
form is for the default entity and the 4-argument form is for a specific
entity.
Are you planning to allow the user to configure any entity and
key-value? Does this mean that you need to implement the dynamic
configuration validation logic in the storage-tool?

-- 
-José

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by Proven Provenzano <pp...@confluent.io.INVALID>.
Hi Luke,

1. I'm going to assume that migration from ZK to KRaft will sync the info
from ZK into the metadata log. There will be no need to bootstrap SCRAM for
KRaft in the migration case. I will need to test this.

2. Done! I've also added a comment to each case so people understand the
change.

Thanks
--Proven



On Wed, Feb 22, 2023 at 5:46 AM Luke Chen <sh...@gmail.com> wrote:

> Hi Proven,
>
> Thanks for the KIP.
>
> Some questions:
> 1. During ZK migration to KRaft, should we format the KRaft controller
> with the SCRAM info, or will it sync from ZK?
> 2. In the KIP, you provided a link to a doc from Confluent (i.e. this
> line: See Configuring SCRAM
> <https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html#configuring-scram>
> for details). Could you change it into the official Kafka doc link? That
> is this one: https://kafka.apache.org/documentation/#security_sasl_scram
>
> Thank you.
> Luke
>
>
> On Wed, Feb 22, 2023 at 6:17 AM José Armando García Sancio
> <js...@confluent.io.invalid> wrote:
>
> > Hi Proven,
> >
> > On Tue, Feb 21, 2023 at 1:37 PM Proven Provenzano
> > <pp...@confluent.io.invalid> wrote:
> > >
> > > Hi Jose,
> > >
> > > 1. The SCRAM in SCRAM-SHA-256 is required as the mechanism name is
> > > SCRAM-SHA-256.
> > >     I do realize there is a bit of redundancy here.
> > >
> > > 2. I'll add documentation for all the possible values. They are
> > > SCRAM-SHA-256 and SCRAM-SHA-512.
> > >
> > > 3. I'd like to keep it with a capital letter as it is a specific type
> > > of message we want to insert. I do agree that -A is not the correct
> > > choice, so I'll suggest -S for SCRAM, and that leaves -A for --add-acl
> > > and -a for a generic --add-config in the future.
> > >
> >
> > It makes sense to me. Thanks for the changes.
> >
> > --
> > -José
> >
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by Luke Chen <sh...@gmail.com>.
Hi Proven,

Thanks for the KIP.

Some questions:
1. During ZK migration to KRaft, should we format the KRaft controller with
the SCRAM info, or will it sync from ZK?
2. In the KIP, you provided a link to a doc from Confluent (i.e. this line: See
Configuring SCRAM
<https://docs.confluent.io/platform/current/kafka/authentication_sasl/authentication_sasl_scram.html#configuring-scram>
for details). Could you change it into the official Kafka doc link? That is
this one: https://kafka.apache.org/documentation/#security_sasl_scram

Thank you.
Luke


On Wed, Feb 22, 2023 at 6:17 AM José Armando García Sancio
<js...@confluent.io.invalid> wrote:

> Hi Proven,
>
> On Tue, Feb 21, 2023 at 1:37 PM Proven Provenzano
> <pp...@confluent.io.invalid> wrote:
> >
> > Hi Jose,
> >
> > 1. The SCRAM in SCRAM-SHA-256 is required as the mechanism name is
> > SCRAM-SHA-256.
> >     I do realize there is a bit of redundancy here.
> >
> > 2. I'll add documentation for all the possible values. They are
> > SCRAM-SHA-256 and SCRAM-SHA-512.
> >
> > 3. I'd like to keep it with a capital letter as it is a specific type of
> > message we want to insert. I do agree that -A is not the correct choice,
> > so I'll suggest -S for SCRAM, and that leaves -A for --add-acl and -a
> > for a generic --add-config in the future.
> >
>
> It makes sense to me. Thanks for the changes.
>
> --
> -José
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by José Armando García Sancio <js...@confluent.io.INVALID>.
Hi Proven,

On Tue, Feb 21, 2023 at 1:37 PM Proven Provenzano
<pp...@confluent.io.invalid> wrote:
>
> Hi Jose,
>
> 1. The SCRAM in SCRAM-SHA-256 is required as the mechanism name is
> SCRAM-SHA-256.
>     I do realize there is a bit of redundancy here.
>
> 2. I'll add documentation for all the possible values. They are
> SCRAM-SHA-256 and SCRAM-SHA-512.
>
> 3. I'd like to keep it with a capital letter as it is a specific type of
> message we want to insert. I do agree that -A is not the correct choice,
> so I'll suggest -S for SCRAM, and that leaves -A for --add-acl and -a for
> a generic --add-config in the future.
>

It makes sense to me. Thanks for the changes.

-- 
-José

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by Proven Provenzano <pp...@confluent.io.INVALID>.
Hi Jose,

1. The SCRAM in SCRAM-SHA-256 is required, as the mechanism name is
SCRAM-SHA-256. I do realize there is a bit of redundancy here.

2. I'll add documentation for all the possible values. They are
SCRAM-SHA-256 and SCRAM-SHA-512.

3. I'd like to keep it with a capital letter as it is a specific type of
message we want to insert. I do agree that -A is not the correct choice, so
I'll suggest -S for SCRAM, and that leaves -A for --add-acl and -a for a
generic --add-config in the future.

Thanks
--Proven

On Fri, Feb 17, 2023 at 1:32 PM José Armando García Sancio
<js...@confluent.io.invalid> wrote:

> Hi Proven,
>
> Thanks for the changes to KIP-900. It looks good to me in general.
> Here are some suggestions and questions.
>
> 1. In the KIP you give the following example:
> --add-scram SCRAM-SHA-256=[user=alice,password=alice-secret]
>
> Is "SCRAM-" required as a prefix? The flag already has the suffix
> "-scram". Can the value to the flag be
> SHA-256=[user=alice,password=alice-secret]?
>
> 2. Should the CLI document all possible values for the --add-scram? Is
> SCRAM-SHA-256 the only supported algorithm?
>
> 3. Should the short version of the flag --add-scram be -s? I suspect
> that in the future we may want to add more options like --add-acl and
> --add-config.
>
> Thanks!
> --
> -José
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by José Armando García Sancio <js...@confluent.io.INVALID>.
Hi Proven,

Thanks for the changes to KIP-900. It looks good to me in general.
Here are some suggestions and questions.

1. In the KIP you give the following example:
--add-scram SCRAM-SHA-256=[user=alice,password=alice-secret]

Is "SCRAM-" required as a prefix? The flag already has the suffix
"-scram". Can the value to the flag be
SHA-256=[user=alice,password=alice-secret]?

2. Should the CLI document all possible values for the --add-scram? Is
SCRAM-SHA-256 the only supported algorithm?

3. Should the short version of the flag --add-scram be -s? I suspect
that in the future we may want to add more options like --add-acl and
--add-config.

Thanks!
-- 
-José

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by Proven Provenzano <pp...@confluent.io.INVALID>.
The original idea was to pass in the JSON string representing the
UserScramCredentialsRecord directly, to make this simple and to require no
parsing at all. Here is an example of the JSON object:
{"name":"alice","mechanism":1,"salt":"MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=","SaltedPassword":"mT0yyUUxnlJaC99HXgRTSYlbuqa4FSGtJCJfTMvjYCE=","iterations":8192}
Note that it isn't very friendly. The mechanism is an integer value, 1 or 2,
not an enum such as SCRAM-SHA-256 or SCRAM-SHA-512. The salt and iterations
are required, and there is no password, just a SaltedPassword which the
customer would have to generate externally.
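
As an aside, "generate externally" means the customer would have to run the
SCRAM key derivation themselves. A minimal sketch of that in Java, assuming
the standard RFC 5802 derivation (SaltedPassword is PBKDF2 of the password
with the salt and iteration count; the class name is just for illustration):

import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class SaltedPasswordSketch {
    public static void main(String[] args) throws Exception {
        // SCRAM's SaltedPassword is Hi(password, salt, iterations),
        // i.e. PBKDF2 with HMAC using the mechanism's hash.
        byte[] salt = Base64.getDecoder()
            .decode("MWx2NHBkbnc0ZndxN25vdGN4bTB5eTFrN3E=");
        PBEKeySpec spec = new PBEKeySpec(
            "alice-secret".toCharArray(), salt, 8192, 256); // 256-bit key for SHA-256
        SecretKeyFactory factory =
            SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] saltedPassword = factory.generateSecret(spec).getEncoded();
        System.out.println(Base64.getEncoder().encodeToString(saltedPassword));
    }
}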

Moving away from the above, we will have to parse and validate arguments
and then generate the UserScramCredentialsRecord from them. The question is
what that looks like. Should it be closer to what kafka-configs uses, or
should it be our own made-up JSON format? Whichever we choose, one format
should be sufficient, as it will only be used in this application.

The requirements so far for argument parsing are:

   - We want to specify the mechanism as a customer-friendly enum,
   SCRAM-SHA-256 or SCRAM-SHA-512.
   - We want the salt and iterations to be optional and have defaults if
   not specified.
   - We want the customer to be able to specify a password, which then
   generates the salted password.
   - We want to allow the customer to specify a salted password if they so
   choose.
   - We want the user to specify a user for the credentials or to specify
   default-entity.

This is on top of the arguments needed for kafka-storage. We should also
look forward to when we add additional
record types that need to be parsed and stored for bootstrap.

What I am suggesting is that we have an --add-config argument that requires
at least one key=value subargument which indicates which record type to add.
An example would be SCRAM-SHA-256=[iterations=8192,password=alice-secret].
This indicates that the record to add is a UserScramCredentialsRecord with
mechanism 1 (SCRAM-SHA-256), and that there are some key-values to add to
the record. This is very much like kafka-configs. Note that this record is
still incomplete: we need to specify a user to apply it to, and that is
where the entity-type users entity-name alice subarguments are needed. If,
after parsing the arguments, the record is incomplete, then kafka-storage
will exit with a failure.
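
A rough sketch of that parse-and-validate step (hypothetical names; the real
UserScramCredentialsRecord is generated from the record schema, and
"salted-password" here is a made-up key name):

import java.util.HashMap;
import java.util.Map;

class ScramConfigParser {
    // Parses e.g. "SCRAM-SHA-256=[iterations=8192,password=alice-secret]".
    static Map<String, String> parse(String arg) {
        int eq = arg.indexOf('=');
        String mechanism = arg.substring(0, eq);
        if (!mechanism.equals("SCRAM-SHA-256")
                && !mechanism.equals("SCRAM-SHA-512")) {
            throw new IllegalArgumentException("Unknown mechanism: " + mechanism);
        }
        String body = arg.substring(eq + 1);
        if (!body.startsWith("[") || !body.endsWith("]")) {
            throw new IllegalArgumentException("Expected [key=value,...]: " + body);
        }
        Map<String, String> fields = new HashMap<>();
        fields.put("mechanism", mechanism);
        for (String pair : body.substring(1, body.length() - 1).split(",")) {
            int i = pair.indexOf('=');
            fields.put(pair.substring(0, i).trim(), pair.substring(i + 1).trim());
        }
        // An incomplete record is an error: some form of password is needed.
        if (!fields.containsKey("password")
                && !fields.containsKey("salted-password")) {
            throw new IllegalArgumentException("Record is incomplete: no password");
        }
        return fields;
    }
}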

--Proven



On Mon, Feb 13, 2023 at 4:54 PM José Armando García Sancio
<js...@confluent.io.invalid> wrote:

> Comments below.
>
> On Mon, Feb 13, 2023 at 11:44 AM Proven Provenzano
> <pp...@confluent.io.invalid> wrote:
> >
> > Hi Jose
> >
> > I want to clarify that the file parsing that Argparse4j provides is
> > just a mechanism for taking command line args and putting them in a
> > file. It doesn't actually change what the command line args are for
> > processing the file. So I can add any kafka-storage command line arg
> > into the file, including, say, the storage UUID. I see that the file
> > parsing will be useful in the future as we add more record types to add
> > for the bootstrap process.
>
> Understood.
>
> > I'm against adding specific parsing for a list of configs vs. a
> > separate JSON file, as it is adding more surface area that needs testing
> > for a feature that is used infrequently. One config method should be
> > sufficient for one or more SCRAM records that a customer wants to
> > bootstrap with.
>
> Does this mean that the storage tool won't parse and validate SCRAM
> configuration? How will the user know that their SCRAM configuration
> is correct? Do they need to start the cluster to discover if their
> SCRAM configuration is correct?
>
> Thanks,
> --
> -José
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by José Armando García Sancio <js...@confluent.io.INVALID>.
Comments below.

On Mon, Feb 13, 2023 at 11:44 AM Proven Provenzano
<pp...@confluent.io.invalid> wrote:
>
> Hi Jose
>
> I want to clarify that the file parsing that Argparse4j provides is just
> a mechanism for taking command line args and putting them in a file. It
> doesn't actually change what the command line args are for processing the
> file. So I can add any kafka-storage command line arg into the file,
> including, say, the storage UUID. I see that the file parsing will be
> useful in the future as we add more record types to add for the bootstrap
> process.

Understood.

> I'm against adding specific parsing for a list of configs vs. a separate
> JSON file, as it is adding more surface area that needs testing for a
> feature that is used infrequently. One config method should be sufficient
> for one or more SCRAM records that a customer wants to bootstrap with.

Does this mean that the storage tool won't parse and validate SCRAM
configuration? How will the user know that their SCRAM configuration
is correct? Do they need to start the cluster to discover if their
SCRAM configuration is correct?

Thanks,
-- 
-José

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by Proven Provenzano <pp...@confluent.io.INVALID>.
Hi Jose

I want to clarify that the file parsing that Argparse4j provides is just a
mechanism for taking command line args and putting them in a file. It
doesn't actually change what the command line args are for processing the
file. So I can add any kafka-storage command line arg into the file,
including, say, the storage UUID. I see that the file parsing will be
useful in the future as we add more record types to add for the bootstrap
process.
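
As a sketch of how that hooks up, using Argparse4j's fromFilePrefix support
(per https://argparse4j.github.io/usage.html#fromfileprefix, mentioned
earlier in the thread; the flag names here are just illustrative):

import net.sourceforge.argparse4j.ArgumentParsers;
import net.sourceforge.argparse4j.impl.Arguments;
import net.sourceforge.argparse4j.inf.ArgumentParser;
import net.sourceforge.argparse4j.inf.Namespace;

public class StorageToolArgsSketch {
    public static void main(String[] args) throws Exception {
        // Any token starting with '@' names a file containing further
        // arguments, one per line, e.g. "kafka-storage.sh @myfile".
        ArgumentParser parser = ArgumentParsers
                .newFor("kafka-storage")
                .fromFilePrefix("@")
                .build();
        parser.addArgument("--cluster-id");
        parser.addArgument("--add-config").action(Arguments.append());
        Namespace ns = parser.parseArgs(args);
        System.out.println(ns);
    }
}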

I'm against adding specific parsing for a list of configs vs. a separate
JSON file, as it is adding more surface area that needs testing for a
feature that is used infrequently. One config method should be sufficient
for one or more SCRAM records that a customer wants to bootstrap with.

Thanks for the feedback
--Proven


On Mon, Feb 13, 2023 at 1:56 PM José Armando García Sancio
<js...@confluent.io.invalid> wrote:

> Thanks for the discussion Colin and Proven.
>
> CLIs can be difficult to design when there are complicated use cases.
> Did we consider having CLI flags only for the common case? I would
> think that the common case is SCRAM for one user.
>
> For the complicated and less common cases they have to provide a
> "configuration file" that contains all of the SCRAM users and their
> passwords.
>
> The common case could be:
> --scram-user alice
> --scram-algo SHA-256
> --scram-password ...
> --scram-salt ...
> --scram-iterations ...
>
> If the user wants to provide multiple scram configurations they need
> to use --scram-config <path-to-file>. The file would be human readable
> (maybe JSON or .properties) and contains the scram configuration for
> multiple users.
>
> --
> -José
>

Re: [DISCUSS] KIP-900: KRaft kafka-storage.sh API additions to support SCRAM

Posted by José Armando García Sancio <js...@confluent.io.INVALID>.
Thanks for the discussion Colin and Proven.

CLIs can be difficult to design when there are complicated use cases.
Did we consider having CLI flags only for the common case? I would
think that the common case is SCRAM for one user.

For the complicated and less common cases, they would have to provide a
"configuration file" that contains all of the SCRAM users and their
passwords.

The common case could be:
--scram-user alice
--scram-algo SHA-256
--scram-password ...
--scram-salt ...
--scram-iterations ...

If the user wants to provide multiple scram configurations they need
to use --scram-config <path-to-file>. The file would be human readable
(maybe JSON or .properties) and contains the scram configuration for
multiple users.
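
For example, one hypothetical shape for such a file, if we went with JSON
(the field names here are made up for illustration):

[
  {"user": "alice", "mechanism": "SCRAM-SHA-256", "password": "alice-secret",
   "iterations": 8192},
  {"user": "bob", "mechanism": "SCRAM-SHA-512", "password": "bob-secret"}
]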

-- 
-José