Posted to dev@flink.apache.org by Chesnay Schepler <ch...@apache.org> on 2022/12/01 15:01:43 UTC

[DISCUSS] Retroactively externalize some connectors for 1.16

Hello,

let me clarify the title first.

In the original proposal for the connector externalization we said that 
an externalized connector has to exist in parallel with the version 
shipped in the main Flink release for one release cycle.

For example, 1.16.0 shipped with the elasticsearch connector, but at the 
same time there's the externalized variant as a drop-in replacement, and 
the 1.17.0 release will not include an ES connector.

The rationale was to give users some window to update their projects.
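Concretely, "drop-in replacement" means a user only swaps the dependency coordinates in their build; the sketch below illustrates this for Maven (the artifact ID and the `3.0.0-1.16` version are assumed examples of the externalized connectors' `<connector-version>-<flink-version>` versioning scheme, not confirmed coordinates):

```xml
<!-- Before: connector versioned together with the main Flink release -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7</artifactId>
    <version>1.16.0</version>
</dependency>

<!-- After: externalized connector, released and versioned independently.
     The version shown is an assumed example of the
     <connector-version>-<flink-version> scheme. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch7</artifactId>
    <version>3.0.0-1.16</version>
</dependency>
```

No application code changes are expected; only the coordinates (and release cadence) differ.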


We are now about to externalize a few more connectors (cassandra, 
pulsar, jdbc), targeting 1.16 within the next week.
The 1.16.0 release was only about a month ago, so not much time has 
passed since then.
I'm now wondering if we could/should treat these connectors as 
externalized for 1.16, meaning that we would remove them from the master 
branch now, not ship them in 1.17 and move all further development into 
the connector repos.

The main benefit is that we won't have to bother with syncing changes 
across repos all the time.

We would of course need some sort of cutoff date for this (December 
9th?), to ensure there's still a reasonably large gap left for users 
to migrate.

Let me know what you think.

Regards,
Chesnay


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Mason Chen <ma...@gmail.com>.
Hi all,

+1 (non-binding), I agree that syncing the changes going forward would be a
huge effort and a cutoff date makes sense.

Best,
Mason


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Ryan Skraba <ry...@aiven.io.INVALID>.
Hello -- this makes sense to me: removing connectors from 1.17 (but not the
1.16 branch) will still give users a long time to migrate.

+1 (non-binding)

Ryan


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Dong Lin <li...@gmail.com>.
Sounds good!

+1


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Chesnay Schepler <ch...@apache.org>.
Dec 9th is just a suggestion; the idea being to have a date that covers 
connectors that are being released right now, while enforcing some 
migration window.

We will not reserve time for such a verification. Release testing is 
meant to achieve that.
Since 1.16.x is unaffected by the removal from the master branch there 
is no risk to existing deployments, while 1.17 is still quite a bit away.



Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Dong Lin <li...@gmail.com>.
Hello Chesnay,

The overall plan sounds good! Just to double check, is Dec 9th the proposed
cutoff date for the release of those externalized connectors?

Also, will we reserve time for users to verify that the drop-in replacement
from Flink 1.16 to those externalized connectors can work as expected
before removing their code from the master branch?

Thanks,
Dong



Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Chesnay Schepler <ch...@apache.org>.
The list of connectors was just those that I'm aware of. Any connector 
that meets the deadline would be included in this proposal.


Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Ferenc Csaky <fe...@pm.me.INVALID>.
Hi!

I think this would be a good idea. I was wondering whether we could include the hbase connector in this group as well. The externalization PR [1] should be in good shape now, and Dec 9th as a release date sounds doable.

WDYT?

[1] https://github.com/apache/flink-connector-hbase/pull/2

Best,
F





Re: [DISCUSS] Retroactively externalize some connectors for 1.16

Posted by Danny Cranmer <da...@apache.org>.
Hello,

+1

I was thinking the same. With regard to the cutoff date, I would be
inclined to be more aggressive and say the feature freeze for 1.17. Users
do not *need* to migrate for 1.16.

Thanks
