Posted to common-dev@hadoop.apache.org by Wei-Chiu Chuang <we...@apache.org> on 2019/10/21 17:33:05 UTC

What should we do about dependency updates?

Hi Hadoop developers,

I've always had this question and I don't know the answer.

Over the last few months I have finally spent time dealing with the
vulnerability reports from our internal dependency-check tools.

Say in HADOOP-16152 <https://issues.apache.org/jira/browse/HADOOP-16152>
we updated Jetty from 9.3.27 to 9.4.20 because of CVE-2019-16869. Should I
cherry-pick that fix into all of the lower release lines?
It is not a trivial change, and it breaks downstream projects like Tez. On
the other hand, it doesn't seem reasonable to put the fix only in trunk and
leave older releases vulnerable. What is the expectation of downstream
applications w.r.t. breaking compatibility vs. fixing security issues?
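
For illustration only (not part of HADOOP-16152, and using nothing but
standard JDK calls against Jetty's real org.eclipse.jetty.server.Server
class), a quick way to see which Jetty a given process or release line is
actually running against is a classpath check like this sketch, run with
the same classpath as the application:

    // Illustrative sketch: report which Jetty version and which jar a JVM
    // is actually running against.
    import java.net.URL;

    public class JettyVersionCheck {
        public static void main(String[] args) throws Exception {
            Class<?> server = Class.forName("org.eclipse.jetty.server.Server");
            // Implementation-Version from the Jetty jar manifest, if present.
            String version = server.getPackage().getImplementationVersion();
            // The jar the class was loaded from; useful when several copies leak in.
            URL jar = server.getProtectionDomain().getCodeSource().getLocation();
            System.out.println("Jetty " + version + " loaded from " + jar);
        }
    }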

Thoughts?

Re: What should we do about dependency updates?

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
We don't have a complete set of shaded artefacts, so it's not fair to point
at the downstream users and say it's "your own fault". We need to do more
here ourselves.


Here, it is a CVE, so they should upgrade anyway. Guava is special because
it has been so brittle across versions and is so widely used by so many
applications.

On Tue, Oct 22, 2019 at 6:15 PM Wei-Chiu Chuang <we...@apache.org> wrote:

> Hi Sean,
> Thanks for the valuable feedback.
> Good point on not using dependency classes in public API parameters. One
> example is HADOOP-15502
> <https://issues.apache.org/jira/browse/HADOOP-15502> (blame
> me for breaking the API)
>
> From what I know, the biggest risk is that downstreamers include
> dependencies from Hadoop implicitly. Therefore, if Hadoop updates a
> dependency that has a breaking change, the downstream application breaks,
> sometimes during compile time, sometimes at runtime.
>
> Jetty is probably not the best example. But take Guava as an example, when
> we updated Guava from 11.0 to 27.0, it breaks downstreamers like crazy --
> Hive, Tez, Pheonix, Oozie all have to make changes. Probably not Hadoop's
> responsibility that they don't use shaded Hadoop client artifacts, but
> maybe we can add release note and state potential breaking changes.
>
> On Tue, Oct 22, 2019 at 7:43 AM Sean Busbey <bu...@cloudera.com.invalid>
> wrote:
>
> > speaking with my HBase hat on instead of my Hadoop hat, when the
> > Hadoop project publishes that there's a CVE but does not include a
> > maintenance release that mitigates it for a given minor release line,
> > we assume that means the Hadoop project is saying that release line is
> > EOM and should be abandoned.
> >
> > I don't know if that's an accurate interpretation in all cases.
> >
> > With my Hadoop hat on, I think downstream projects should use the
> > interfaces we say are safe to use and those interfaces should not
> > include dependencies where practical. I don't know how often a CVE
> > comes along for things like our logging API dependency, for example.
> > But downstream folks should definitely not rely on dependencies we use
> > for internal service, so I'm surprised that a version change for Jetty
> > would impact downstream.
> >
> >
> > On Mon, Oct 21, 2019 at 12:33 PM Wei-Chiu Chuang <we...@apache.org>
> > wrote:
> > >
> > > Hi Hadoop developers,
> > >
> > > I've always had this question and I don't know the answer.
> > >
> > > For the last few months I finally spent time to deal with the
> > vulnerability
> > > reports from our internal dependency check tools.
> > >
> > > Say in HADOOP-16152 <
> https://issues.apache.org/jira/browse/HADOOP-16152>
> > > we update Jetty from 9.3.27 to 9.4.20 because of CVE-2019-16869,
> should I
> > > cherrypick the fix into all lower releases?
> > > This is not a trivial change, and it breaks downstreams like Tez. On
> the
> > > other hand, it doesn't seem reasonable if I put this fix only in trunk,
> > and
> > > left older releases vulnerable. What's the expectation of downstream
> > > applications w.r.t breaking compatibility vs fixing security issues?
> > >
> > > Thoughts?
> >
> >
> >
> > --
> > busbey
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
> > For additional commands, e-mail: common-dev-help@hadoop.apache.org
> >
> >
>

Re: What should we do about dependency updates?

Posted by Wei-Chiu Chuang <we...@apache.org>.
Hi Sean,
Thanks for the valuable feedback.
Good point on not using dependency classes in public API parameters. One
example is HADOOP-15502
<https://issues.apache.org/jira/browse/HADOOP-15502> (blame
me for breaking the API)
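
As an illustration of that anti-pattern (the class and methods below are
made up, not the actual HADOOP-15502 API): once a third-party type appears
in a public signature, every upgrade of that dependency on either side is a
potential compatibility break, whereas a JDK type keeps the dependency an
internal detail.

    // Hypothetical example only; not the real HADOOP-15502 interface.
    import java.util.Optional;

    public class MetricsLookup {
        // Leaks a Guava type: callers must compile and run against a
        // compatible Guava, and a Guava upgrade can break them.
        public com.google.common.base.Optional<Long> lookupGuava(String name) {
            return com.google.common.base.Optional.absent();
        }

        // Same functionality with a JDK type: Guava stays an internal
        // implementation detail that Hadoop is free to upgrade.
        public Optional<Long> lookup(String name) {
            return Optional.empty();
        }
    }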

From what I know, the biggest risk is that downstream projects pick up
dependencies from Hadoop implicitly, as transitive dependencies. So if
Hadoop updates a dependency in a way that is a breaking change, the
downstream application breaks, sometimes at compile time and sometimes at
runtime.
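
A concrete example of the runtime case (hypothetical code, but the Guava
history is real): Objects.toStringHelper existed in Guava 11 and was
removed in Guava 21, so code compiled against the Guava that Hadoop used to
leak onto the classpath fails once Guava 27 shows up at runtime.

    // Hypothetical downstream class; Guava arrives transitively from
    // Hadoop rather than being declared by the project itself.
    public class JobSummary {
        private final String name = "example";
        private final long records = 0L;

        // Compiles against Guava 11.x, but Objects.toStringHelper (and its
        // ToStringHelper type) were removed in Guava 21, so this fails with
        // NoSuchMethodError / NoClassDefFoundError on a Guava 27 classpath.
        // The replacement, com.google.common.base.MoreObjects.toStringHelper
        // (Guava 18+), is the migration downstream projects had to make.
        @Override
        public String toString() {
            return com.google.common.base.Objects.toStringHelper(this)
                .add("name", name)
                .add("records", records)
                .toString();
        }
    }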

Jetty is probably not the best example. But take Guava: when we updated
Guava from 11.0 to 27.0, it broke downstream projects like crazy; Hive,
Tez, Phoenix, and Oozie all had to make changes. It is probably not
Hadoop's responsibility that they don't use the shaded Hadoop client
artifacts, but maybe we can add a release note that states the potential
breaking changes.
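
For projects that can move to the shaded hadoop-client-api and
hadoop-client-runtime artifacts, a client written purely against Hadoop's
public FileSystem API, as sketched below, never sees Hadoop's internal
Jetty or Guava at compile time (the fs.defaultFS value and path here are
placeholders):

    // Sketch of downstream code that sticks to Hadoop's public, stable API;
    // no third-party types from Hadoop's internals appear in it.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListUserDirs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode:8020");  // placeholder
            try (FileSystem fs = FileSystem.get(conf)) {
                for (FileStatus status : fs.listStatus(new Path("/user"))) {
                    System.out.println(status.getPath() + " " + status.getLen());
                }
            }
        }
    }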

On Tue, Oct 22, 2019 at 7:43 AM Sean Busbey <bu...@cloudera.com.invalid>
wrote:

> speaking with my HBase hat on instead of my Hadoop hat, when the
> Hadoop project publishes that there's a CVE but does not include a
> maintenance release that mitigates it for a given minor release line,
> we assume that means the Hadoop project is saying that release line is
> EOM and should be abandoned.
>
> I don't know if that's an accurate interpretation in all cases.
>
> With my Hadoop hat on, I think downstream projects should use the
> interfaces we say are safe to use and those interfaces should not
> include dependencies where practical. I don't know how often a CVE
> comes along for things like our logging API dependency, for example.
> But downstream folks should definitely not rely on dependencies we use
> for internal service, so I'm surprised that a version change for Jetty
> would impact downstream.
>
>
> On Mon, Oct 21, 2019 at 12:33 PM Wei-Chiu Chuang <we...@apache.org>
> wrote:
> >
> > Hi Hadoop developers,
> >
> > I've always had this question and I don't know the answer.
> >
> > For the last few months I finally spent time to deal with the
> vulnerability
> > reports from our internal dependency check tools.
> >
> > Say in HADOOP-16152 <https://issues.apache.org/jira/browse/HADOOP-16152>
> > we update Jetty from 9.3.27 to 9.4.20 because of CVE-2019-16869, should I
> > cherrypick the fix into all lower releases?
> > This is not a trivial change, and it breaks downstreams like Tez. On the
> > other hand, it doesn't seem reasonable if I put this fix only in trunk,
> and
> > left older releases vulnerable. What's the expectation of downstream
> > applications w.r.t breaking compatibility vs fixing security issues?
> >
> > Thoughts?
>
>
>
> --
> busbey
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: common-dev-help@hadoop.apache.org
>
>

Re: What should we do about dependency updates?

Posted by Sean Busbey <bu...@cloudera.com.INVALID>.
Speaking with my HBase hat on instead of my Hadoop hat: when the
Hadoop project publishes that there's a CVE but does not include a
maintenance release that mitigates it for a given minor release line,
we assume that means the Hadoop project is saying that release line is
EOM and should be abandoned.

I don't know if that's an accurate interpretation in all cases.

With my Hadoop hat on, I think downstream projects should use the
interfaces we say are safe to use, and those interfaces should not
expose dependencies where practical. I don't know how often a CVE
comes along for something like our logging API dependency, for example.
But downstream folks should definitely not rely on dependencies we use
for internal services, so I'm surprised that a version change for Jetty
would impact anything downstream.
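
What that accidental reliance usually looks like (a made-up example, not
taken from any real project): code that imports Jetty classes it never
declared, running only because Hadoop happens to put jetty-server on the
classpath, so a Hadoop-side 9.3 to 9.4 bump changes its Jetty underneath
it.

    // Hypothetical downstream code: nothing here is a Hadoop API, yet it
    // runs only because Jetty arrives transitively via Hadoop's jars.
    import org.eclipse.jetty.server.Server;

    public class AccidentalJettyUser {
        public static void main(String[] args) throws Exception {
            Server server = new Server(8080);  // Jetty's own, version-sensitive API
            server.start();
            server.join();
        }
    }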


On Mon, Oct 21, 2019 at 12:33 PM Wei-Chiu Chuang <we...@apache.org> wrote:
>
> Hi Hadoop developers,
>
> I've always had this question and I don't know the answer.
>
> For the last few months I finally spent time to deal with the vulnerability
> reports from our internal dependency check tools.
>
> Say in HADOOP-16152 <https://issues.apache.org/jira/browse/HADOOP-16152>
> we update Jetty from 9.3.27 to 9.4.20 because of CVE-2019-16869, should I
> cherrypick the fix into all lower releases?
> This is not a trivial change, and it breaks downstreams like Tez. On the
> other hand, it doesn't seem reasonable if I put this fix only in trunk, and
> left older releases vulnerable. What's the expectation of downstream
> applications w.r.t breaking compatibility vs fixing security issues?
>
> Thoughts?



--
busbey

---------------------------------------------------------------------
To unsubscribe, e-mail: common-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-dev-help@hadoop.apache.org

