Posted to common-dev@hadoop.apache.org by Chao Sun <su...@apache.org> on 2022/01/19 17:50:16 UTC

[VOTE] Release Apache Hadoop 3.3.2 - RC2

Hi all,

I've put together Hadoop 3.3.2 RC2 below:

The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
The RC tag is at:
https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
The Maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1332

You can find my public key at:
https://downloads.apache.org/hadoop/common/KEYS

I've done the following tests and they look good:
- Ran all the unit tests
- Started a single node HDFS cluster and tested a few simple commands
- Ran all the tests in Spark using the RC2 artifacts
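
For reference, a minimal sketch of the basic verification steps (assuming
curl, gpg and shasum are available; file names as published in the RC
directory above):

```
# download the release tarball, signature and checksum
curl -LO http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/hadoop-3.3.2.tar.gz
curl -LO http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/hadoop-3.3.2.tar.gz.asc
curl -LO http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/hadoop-3.3.2.tar.gz.sha512

# import the Hadoop signing keys and verify the signature
curl -s https://downloads.apache.org/hadoop/common/KEYS | gpg --import
gpg --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz

# compare the published SHA512 with a locally computed one
cat hadoop-3.3.2.tar.gz.sha512
shasum -a 512 hadoop-3.3.2.tar.gz
```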

Please evaluate the RC and vote, thanks!

Best,
Chao

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
thanks, i'm on it... will run the aws and azure tests and then play with
the artifacts
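
for the record, the invocations i use are roughly the sketch below (module
paths assumed; both runs need the usual aws/azure test credentials set up
locally):

```
# hadoop-aws integration tests against S3
cd hadoop-tools/hadoop-aws
mvn verify -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale

# hadoop-azure ABFS tests
cd ../hadoop-azure
mvn verify -Dparallel-tests=abfs -DtestsThreadCount=6
```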

On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Chao Sun <su...@apache.org>.
Thanks all!! I'll prepare RC3 now including HADOOP-18094
<https://issues.apache.org/jira/browse/HADOOP-18094> and will start a new
vote soon.

Best,
Chao

On Tue, Jan 25, 2022 at 2:23 AM Steve Loughran <st...@cloudera.com.invalid>
wrote:

> that error
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
> http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet
>
> implies maven is not downloading http artifacts, and it had decided that
> the reslet artifacts were coming off an http repo, even though its in maven
> central
>
> which means look at your global maven settings
>
>
>
> On Tue, 25 Jan 2022 at 07:27, Mukund Madhav Thakur
> <mt...@cloudera.com.invalid> wrote:
>
> > Hi Chao,
> > I was using the command "mvn package -Pdist -DskipTests -Dtar
> > -Dmaven.javadoc.skip=true" on commit id *6da346a358c. *
> > It is working for me today. So maybe it was an intermittent issue in my
> > local last time when I was trying this. So we can ignore this. Thanks
> >
> >
> >
> > On Tue, Jan 25, 2022 at 6:21 AM Stack <st...@duboce.net> wrote:
> >
> > > +1 (binding)
> > >
> > >         * Signature: ok
> > >         * Checksum : ok
> > >         * Rat check (1.8.0_191): ok
> > >          - mvn clean apache-rat:check
> > >         * Built from source (1.8.0_191): ok
> > >          - mvn clean install  -DskipTests
> > >
> > > Poking around in the binary, it looks good. Unpacked site. Looks right.
> > > Checked a few links work.
> > >
> > > Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours
> > w/
> > > chaos. Worked like 3.3.1...
> > >
> > > I tried to build with 3.8.1 maven and got the below.
> > >
> > > [ERROR] Failed to execute goal on project
> > > hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies
> > for
> > > project
> > > org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2:
> > Failed
> > > to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
> > > org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact
> descriptor
> > > for org.restlet.
> > > jee:org.restlet:jar:2.3.0: Could not transfer artifact
> > > org.restlet.jee:org.restlet:pom:2.3.0 from/to
> maven-default-http-blocker
> > (
> > > http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
> > > http://maven.restlet.org, default, releases+snapshots),
> > apache.snapshots (
> > > http://repository.apache.org/snapshots, default, disabled)] -> [Help
> 1]
> > >
> > > I used 3.6.3 mvn instead (looks like a simple fix).
> > >
> > > Thanks for packaging up this fat point release Chao Sun.
> > >
> > > S
> > >
> > > On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I've put together Hadoop 3.3.2 RC2 below:
> > > >
> > > > The RC is available at:
> > > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > > The RC tag is at:
> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > > The Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > > >
> > > > You can find my public key at:
> > > > https://downloads.apache.org/hadoop/common/KEYS
> > > >
> > > > I've done the following tests and they look good:
> > > > - Ran all the unit tests
> > > > - Started a single node HDFS cluster and tested a few simple commands
> > > > - Ran all the tests in Spark using the RC2 artifacts
> > > >
> > > > Please evaluate the RC and vote, thanks!
> > > >
> > > > Best,
> > > > Chao
> > > >
> > >
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
that error
org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet

implies maven is refusing to download artifacts over plain http (the
default from maven 3.8.1 onwards), and it had decided that the restlet
artifacts were coming off an http repo, even though they're in maven central

which means: look at your global maven settings (~/.m2/settings.xml)
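
if you don't want to drop back to an older maven, something like the
following in ~/.m2/settings.xml should re-route the blocked repo (minimal
sketch, assuming no existing settings.xml to merge with; the HTTPS URL is
an assumption -- check what the restlet repo actually serves):

```
# sketch: mirror the http-only maven-restlet repo to an https endpoint
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>restlet-https</id>
      <mirrorOf>maven-restlet</mirrorOf>
      <!-- assumed https endpoint; verify before relying on it -->
      <url>https://maven.restlet.talend.com</url>
    </mirror>
  </mirrors>
</settings>
EOF
```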



On Tue, 25 Jan 2022 at 07:27, Mukund Madhav Thakur
<mt...@cloudera.com.invalid> wrote:

> Hi Chao,
> I was using the command "mvn package -Pdist -DskipTests -Dtar
> -Dmaven.javadoc.skip=true" on commit id *6da346a358c. *
> It is working for me today. So maybe it was an intermittent issue in my
> local last time when I was trying this. So we can ignore this. Thanks
>
>
>
> On Tue, Jan 25, 2022 at 6:21 AM Stack <st...@duboce.net> wrote:
>
> > +1 (binding)
> >
> >         * Signature: ok
> >         * Checksum : ok
> >         * Rat check (1.8.0_191): ok
> >          - mvn clean apache-rat:check
> >         * Built from source (1.8.0_191): ok
> >          - mvn clean install  -DskipTests
> >
> > Poking around in the binary, it looks good. Unpacked site. Looks right.
> > Checked a few links work.
> >
> > Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours
> w/
> > chaos. Worked like 3.3.1...
> >
> > I tried to build with 3.8.1 maven and got the below.
> >
> > [ERROR] Failed to execute goal on project
> > hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies
> for
> > project
> > org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2:
> Failed
> > to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
> > org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
> > for org.restlet.
> > jee:org.restlet:jar:2.3.0: Could not transfer artifact
> > org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker
> (
> > http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
> > http://maven.restlet.org, default, releases+snapshots),
> apache.snapshots (
> > http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> >
> > I used 3.6.3 mvn instead (looks like a simple fix).
> >
> > Thanks for packaging up this fat point release Chao Sun.
> >
> > S
> >
> > On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:
> >
> > > Hi all,
> > >
> > > I've put together Hadoop 3.3.2 RC2 below:
> > >
> > > The RC is available at:
> > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > The RC tag is at:
> > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > The Maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > >
> > > You can find my public key at:
> > > https://downloads.apache.org/hadoop/common/KEYS
> > >
> > > I've done the following tests and they look good:
> > > - Ran all the unit tests
> > > - Started a single node HDFS cluster and tested a few simple commands
> > > - Ran all the tests in Spark using the RC2 artifacts
> > >
> > > Please evaluate the RC and vote, thanks!
> > >
> > > Best,
> > > Chao
> > >
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Mukund Madhav Thakur <mt...@cloudera.com.INVALID>.
Hi Chao,
I was using the command "mvn package -Pdist -DskipTests -Dtar
-Dmaven.javadoc.skip=true" on commit id 6da346a358c.
It is working for me today, so it was probably an intermittent issue in my
local environment last time. We can ignore this. Thanks!



On Tue, Jan 25, 2022 at 6:21 AM Stack <st...@duboce.net> wrote:

> +1 (binding)
>
>         * Signature: ok
>         * Checksum : ok
>         * Rat check (1.8.0_191): ok
>          - mvn clean apache-rat:check
>         * Built from source (1.8.0_191): ok
>          - mvn clean install  -DskipTests
>
> Poking around in the binary, it looks good. Unpacked site. Looks right.
> Checked a few links work.
>
> Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours w/
> chaos. Worked like 3.3.1...
>
> I tried to build with 3.8.1 maven and got the below.
>
> [ERROR] Failed to execute goal on project
> hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for
> project
> org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2: Failed
> to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
> org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
> for org.restlet.
> jee:org.restlet:jar:2.3.0: Could not transfer artifact
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
> http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
> http://maven.restlet.org, default, releases+snapshots), apache.snapshots (
> http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
>
> I used 3.6.3 mvn instead (looks like a simple fix).
>
> Thanks for packaging up this fat point release Chao Sun.
>
> S
>
> On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:
>
> > Hi all,
> >
> > I've put together Hadoop 3.3.2 RC2 below:
> >
> > The RC is available at:
> > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > The RC tag is at:
> > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > The Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> >
> > You can find my public key at:
> > https://downloads.apache.org/hadoop/common/KEYS
> >
> > I've done the following tests and they look good:
> > - Ran all the unit tests
> > - Started a single node HDFS cluster and tested a few simple commands
> > - Ran all the tests in Spark using the RC2 artifacts
> >
> > Please evaluate the RC and vote, thanks!
> >
> > Best,
> > Chao
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Stack <st...@duboce.net>.
+1 (binding)

        * Signature: ok
        * Checksum : ok
        * Rat check (1.8.0_191): ok
         - mvn clean apache-rat:check
        * Built from source (1.8.0_191): ok
         - mvn clean install  -DskipTests

Poking around in the binary, it looks good. Unpacked the site; looks right.
Checked that a few links work.

Deployed on a ten-node cluster. Ran HBase ITBLL over it for a few hours w/
chaos. Worked like 3.3.1...

I tried to build with Maven 3.8.1 and got the error below.

[ERROR] Failed to execute goal on project
hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for
project
org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2: Failed
to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
for org.restlet.
jee:org.restlet:jar:2.3.0: Could not transfer artifact
org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
http://maven.restlet.org, default, releases+snapshots), apache.snapshots (
http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]

I used Maven 3.6.3 instead (looks like a simple fix).

Thanks for packaging up this fat point release, Chao Sun.

S

On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Chao Sun <su...@apache.org>.
Hmm interesting. Let me check on this error. Thanks Mukund.
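
In case anyone else wants to poke at it, a quick way to confirm the overlap
is something like this (sketch; jar paths assumed from a local mvn package
build of the RC tag):

```
# check whether both shaded client jars really ship the AvroRecord classes
unzip -l hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.2.jar \
  | grep AvroRecord
unzip -l hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.3.2.jar \
  | grep AvroRecord
```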

Chao

On Fri, Jan 21, 2022 at 4:42 AM Mukund Madhav Thakur
<mt...@cloudera.com.invalid> wrote:

> Checked out the release tag. commit *6da346a358c *
> Seeing below error while compiling :
>
> Duplicate classes found:
>
>
>   Found in:
>
>     org.apache.hadoop:hadoop-client-api:jar:3.3.2:compile
>
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.2:compile
>
>   Duplicate classes:
>
>     org/apache/hadoop/io/serializer/avro/AvroRecord.class
>
>     org/apache/hadoop/io/serializer/avro/AvroRecord$Builder.class
>
>     org/apache/hadoop/io/serializer/avro/AvroRecord$1.class
>
>
> [*INFO*]
> *------------------------------------------------------------------------*
>
> [*INFO*] *Reactor Summary for Apache Hadoop Client Test Minicluster 3.3.2:*
>
> [*INFO*]
>
> [*INFO*] Apache Hadoop Client Test Minicluster .............. *SUCCESS*
> [02:17 min]
>
> [*INFO*] Apache Hadoop Client Packaging Invariants for Test .
> *FAILURE* [  0.221
> s]
>
> [*INFO*] Apache Hadoop Client Packaging Integration Tests ... *SKIPPED*
>
> [*INFO*] Apache Hadoop Distribution ......................... *SKIPPED*
>
> [*INFO*] Apache Hadoop Client Modules ....................... *SKIPPED*
>
> [*INFO*] Apache Hadoop Tencent COS Support .................. *SKIPPED*
>
> [*INFO*] Apache Hadoop Cloud Storage ........................ *SKIPPED*
>
> [*INFO*] Apache Hadoop Cloud Storage Project ................ *SKIPPED*
>
> [*INFO*]
> *------------------------------------------------------------------------*
>
> [*INFO*] *BUILD FAILURE*
>
> [*INFO*]
> *------------------------------------------------------------------------*
>
> [*INFO*] Total time:  02:18 min
>
> [*INFO*] Finished at: 2022-01-21T18:06:11+05:30
>
> [*INFO*]
> *------------------------------------------------------------------------*
>
> [*ERROR*] Failed to execute goal
> org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce
> *(enforce-banned-dependencies)* on project
> hadoop-client-check-test-invariants: *Some Enforcer rules have failed. Look
> above for specific messages explaining why the rule failed.* -> *[Help 1]*
>
>
> On Fri, Jan 21, 2022 at 9:38 AM Wei-Chiu Chuang <we...@apache.org>
> wrote:
>
> > I'll find time to check out the RC bits.
> > I just feel bad that the tarball is now more than 600MB in size.
> >
> > On Fri, Jan 21, 2022 at 2:23 AM Steve Loughran
> <stevel@cloudera.com.invalid
> > >
> > wrote:
> >
> > > *+1 binding.*
> > >
> > > reviewed binaries, source, artifacts in the staging maven repository in
> > > downstream builds. all good.
> > >
> > > *## test run*
> > >
> > > checked out the asf github repo at commit 6da346a358c into a location
> > > already set up with aws and azure test credentials
> > >
> > > ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6
> > >  -Dmarkers=delete -Dscale
> > > and hadoop-azure against azure cardiff with -Dparallel-tests=abfs
> > > -DtestsThreadCount=6
> > >
> > > all happy
> > >
> > >
> > >
> > > *## binary*
> > > downloaded KEYS and imported, so adding your key to my list (also
> signed
> > > this and updated the key servers)
> > >
> > > downloaded rc tar and verified
> > > ```
> > > > gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
> > > gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
> > > gpg:                using RSA key
> > DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
> > > gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <
> > sunchao@apache.org
> > > >"
> > > [full]
> > >
> > >
> > > > cat hadoop-3.3.2.tar.gz.sha512
> > > SHA512 (hadoop-3.3.2.tar.gz) =
> > >
> > >
> >
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> > >
> > > > shasum -a 512 hadoop-3.3.2.tar.gz
> > >
> > >
> >
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> > >  hadoop-3.3.2.tar.gz
> > > ```
> > >
> > >
> > > *# cloudstore against staged artifacts*
> > > ```
> > > cd ~/.m2/repository/org/apache/hadoop
> > > find . -name \*3.3.2\* -print | xargs rm -r
> > > ```
> > > ensures no local builds have tainted the repo.
> > >
> > > in cloudstore mvn build without tests
> > > ```
> > > mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
> > > ```
> > > this fetches all from asf staging
> > >
> > > ```
> > > Downloading from ASF Staging:
> > >
> > >
> >
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > > Downloaded from ASF Staging:
> > >
> > >
> >
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > > (11 kB at 20 kB/s)
> > > ```
> > > there's no tests there, but it did audit the download process. FWIW,
> that
> > > project has switched to logback, so I now have all hadoop imports
> > excluding
> > > slf4j and log4j. it takes too much effort right now.
> > >
> > > build works.
> > >
> > > tested abfs and s3a storediags, all happy
> > >
> > >
> > >
> > >
> > > *### google GCS against staged artifacts*
> > >
> > > gcs is now java 11 only, so I had to switch JVMs here.
> > >
> > > had to add a snapshots and staging profile, after which I could build
> and
> > > test.
> > >
> > > ```
> > >  -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
> > > ```
> > > two test failures were related to auth failures where the tests were
> > trying
> > > to raise exceptions but things failed differently
> > > ```
> > > [ERROR] Failures:
> > > [ERROR]
> > >
> > >
> >
> GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
> > > unexpected exception type thrown; expected:<java.io
> > .FileNotFoundException>
> > > but was:<java.lang.IllegalArgumentException>
> > > [ERROR]
> > >
> > >
> >
> GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
> > > value of: throwable.getMessage()
> > > expected: Failed to create GCS FS
> > > but was : A JSON key file may not be specified at the same time as
> > > credentials via configuration.
> > >
> > > ```
> > >
> > > I'm not worried here.
> > >
> > > ran cloudstore's diagnostics against gcs.
> > >
> > > Nice to see they are now collecting IOStatistics on their input
> streams.
> > we
> > > really need to get this collected through the parquet/orc libs and then
> > > through the query engines.
> > >
> > > ```
> > > > bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/
> > >
> > > ...
> > > 2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
> > > (StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
> > > gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
> > > input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
> > > com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d
> > > {counters=((stream_read_close_operations=1)
> > > (stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
> > > (stream_read_bytes=7) (stream_read_exceptions=0)
> > > (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
> > > (stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
> > > (stream_read_seek_forward_operations=0)
> > > (stream_read_operations_incomplete=1));
> > > gauges=();
> > > minimums=();
> > > maximums=();
> > > means=();
> > > }
> > > ...
> > > ```
> > >
> > > *### source*
> > >
> > > once I'd done builds and tests which fetched from staging, I did a
> local
> > > build and test
> > >
> > > repeated download/validate of source tarball, unzip/untar
> > >
> > > build with java11.
> > >
> > > I've not done the test run there, because that directory tree doesn't
> > have
> > > the credentials, and this mornings run was good.
> > >
> > > altogether then: very happy. tests good, downstream libraries building
> > and
> > > linking.
> > >
> > > On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I've put together Hadoop 3.3.2 RC2 below:
> > > >
> > > > The RC is available at:
> > > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > > The RC tag is at:
> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > > The Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > > >
> > > > You can find my public key at:
> > > > https://downloads.apache.org/hadoop/common/KEYS
> > > >
> > > > I've done the following tests and they look good:
> > > > - Ran all the unit tests
> > > > - Started a single node HDFS cluster and tested a few simple commands
> > > > - Ran all the tests in Spark using the RC2 artifacts
> > > >
> > > > Please evaluate the RC and vote, thanks!
> > > >
> > > > Best,
> > > > Chao
> > > >
> > >
> >
>

> > >
> > > ...
> > > 2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
> > > (StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
> > > gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
> > > input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
> > > com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d
> > > {counters=((stream_read_close_operations=1)
> > > (stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
> > > (stream_read_bytes=7) (stream_read_exceptions=0)
> > > (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
> > > (stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
> > > (stream_read_seek_forward_operations=0)
> > > (stream_read_operations_incomplete=1));
> > > gauges=();
> > > minimums=();
> > > maximums=();
> > > means=();
> > > }
> > > ...
> > > ```
> > >
> > > *### source*
> > >
> > > once I'd done builds and tests which fetched from staging, I did a
> local
> > > build and test
> > >
> > > repeated download/validate of source tarball, unzip/untar
> > >
> > > build with java11.
> > >
> > > I've not done the test run there, because that directory tree doesn't
> > have
> > > the credentials, and this morning's run was good.
> > >
> > > altogether then: very happy. tests good, downstream libraries building
> > and
> > > linking.
> > >
> > > On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:
> > >
> > > > Hi all,
> > > >
> > > > I've put together Hadoop 3.3.2 RC2 below:
> > > >
> > > > The RC is available at:
> > > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > > The RC tag is at:
> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > > The Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > > >
> > > > You can find my public key at:
> > > > https://downloads.apache.org/hadoop/common/KEYS
> > > >
> > > > I've done the following tests and they look good:
> > > > - Ran all the unit tests
> > > > - Started a single node HDFS cluster and tested a few simple commands
> > > > - Ran all the tests in Spark using the RC2 artifacts
> > > >
> > > > Please evaluate the RC and vote, thanks!
> > > >
> > > > Best,
> > > > Chao
> > > >
> > >
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Mukund Madhav Thakur <mt...@cloudera.com.INVALID>.
Checked out the release tag, commit 6da346a358c.
Seeing the below error while compiling:

Duplicate classes found:

  Found in:
    org.apache.hadoop:hadoop-client-api:jar:3.3.2:compile
    org.apache.hadoop:hadoop-client-minicluster:jar:3.3.2:compile
  Duplicate classes:
    org/apache/hadoop/io/serializer/avro/AvroRecord.class
    org/apache/hadoop/io/serializer/avro/AvroRecord$Builder.class
    org/apache/hadoop/io/serializer/avro/AvroRecord$1.class

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary for Apache Hadoop Client Test Minicluster 3.3.2:
[INFO]
[INFO] Apache Hadoop Client Test Minicluster .............. SUCCESS [02:17 min]
[INFO] Apache Hadoop Client Packaging Invariants for Test . FAILURE [  0.221 s]
[INFO] Apache Hadoop Client Packaging Integration Tests ... SKIPPED
[INFO] Apache Hadoop Distribution ......................... SKIPPED
[INFO] Apache Hadoop Client Modules ....................... SKIPPED
[INFO] Apache Hadoop Tencent COS Support .................. SKIPPED
[INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
[INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  02:18 min
[INFO] Finished at: 2022-01-21T18:06:11+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-enforcer-plugin:3.0.0-M1:enforce
(enforce-banned-dependencies) on project hadoop-client-check-test-invariants:
Some Enforcer rules have failed. Look above for specific messages explaining
why the rule failed. -> [Help 1]
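
For anyone else hitting this, a minimal sketch of how one might double-check the report locally. The jar paths are my assumption about the build tree (not taken from the enforcer output), and the enforcer skip is only for local investigation, never for release builds:

```
# Hypothetical spot-check: confirm the AvroRecord classes really are packaged
# into both shaded client jars flagged by the enforcer rule.
# (Jar locations are assumed; adjust to your build tree.)
for jar in hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.2.jar \
           hadoop-client-modules/hadoop-client-minicluster/target/hadoop-client-minicluster-3.3.2.jar; do
  echo "== $jar"
  unzip -l "$jar" | grep 'org/apache/hadoop/io/serializer/avro/AvroRecord'
done

# To keep a local build moving while the duplication is investigated,
# the maven-enforcer-plugin can be skipped:
mvn package -Pdist -DskipTests -Denforcer.skip=true
```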


On Fri, Jan 21, 2022 at 9:38 AM Wei-Chiu Chuang <we...@apache.org> wrote:

> I'll find time to check out the RC bits.
> I just feel bad that the tarball is now more than 600MB in size.
>
> On Fri, Jan 21, 2022 at 2:23 AM Steve Loughran <stevel@cloudera.com.invalid
> >
> wrote:
>
> > *+1 binding.*
> >
> > reviewed binaries, source, artifacts in the staging maven repository in
> > downstream builds. all good.
> >
> > *## test run*
> >
> > checked out the asf github repo at commit 6da346a358c into a location
> > already set up with aws and azure test credentials
> >
> > ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6
> >  -Dmarkers=delete -Dscale
> > and hadoop-azure against azure cardiff with -Dparallel-tests=abfs
> > -DtestsThreadCount=6
> >
> > all happy
> >
> >
> >
> > *## binary*
> > downloaded KEYS and imported, so adding your key to my list (also signed
> > this and updated the key servers)
> >
> > downloaded rc tar and verified
> > ```
> > > gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
> > gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
> > gpg:                using RSA key
> DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
> > gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <
> sunchao@apache.org
> > >"
> > [full]
> >
> >
> > > cat hadoop-3.3.2.tar.gz.sha512
> > SHA512 (hadoop-3.3.2.tar.gz) =
> >
> >
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> >
> > > shasum -a 512 hadoop-3.3.2.tar.gz
> >
> >
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
> >  hadoop-3.3.2.tar.gz
> > ```
> >
> >
> > *# cloudstore against staged artifacts*
> > ```
> > cd ~/.m2/repository/org/apache/hadoop
> > find . -name \*3.3.2\* -print | xargs rm -r
> > ```
> > ensures no local builds have tainted the repo.
> >
> > in cloudstore mvn build without tests
> > ```
> > mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
> > ```
> > this fetches all from asf staging
> >
> > ```
> > Downloading from ASF Staging:
> >
> >
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > Downloaded from ASF Staging:
> >
> >
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> > (11 kB at 20 kB/s)
> > ```
> > there's no tests there, but it did audit the download process. FWIW, that
> > project has switched to logback, so I now have all hadoop imports
> excluding
> > slf4j and log4j. it takes too much effort right now.
> >
> > build works.
> >
> > tested abfs and s3a storediags, all happy
> >
> >
> >
> >
> > *### google GCS against staged artifacts*
> >
> > gcs is now java 11 only, so I had to switch JVMs here.
> >
> > had to add a snapshots and staging profile, after which I could build and
> > test.
> >
> > ```
> >  -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
> > ```
> > two test failures were related to auth failures where the tests were
> trying
> > to raise exceptions but things failed differently
> > ```
> > [ERROR] Failures:
> > [ERROR]
> >
> >
> GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
> > unexpected exception type thrown; expected:<java.io
> .FileNotFoundException>
> > but was:<java.lang.IllegalArgumentException>
> > [ERROR]
> >
> >
> GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
> > value of: throwable.getMessage()
> > expected: Failed to create GCS FS
> > but was : A JSON key file may not be specified at the same time as
> > credentials via configuration.
> >
> > ```
> >
> > I'm not worried here.
> >
> > ran cloudstore's diagnostics against gcs.
> >
> > Nice to see they are now collecting IOStatistics on their input streams.
> we
> > really need to get this collected through the parquet/orc libs and then
> > through the query engines.
> >
> > ```
> > > bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/
> >
> > ...
> > 2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
> > (StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
> > gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
> > input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
> > com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d
> > {counters=((stream_read_close_operations=1)
> > (stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
> > (stream_read_bytes=7) (stream_read_exceptions=0)
> > (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
> > (stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
> > (stream_read_seek_forward_operations=0)
> > (stream_read_operations_incomplete=1));
> > gauges=();
> > minimums=();
> > maximums=();
> > means=();
> > }
> > ...
> > ```
> >
> > *### source*
> >
> > once I'd done builds and tests which fetched from staging, I did a local
> > build and test
> >
> > repeated download/validate of source tarball, unzip/untar
> >
> > build with java11.
> >
> > I've not done the test run there, because that directory tree doesn't
> have
> > the credentials, and this morning's run was good.
> >
> > altogether then: very happy. tests good, downstream libraries building
> and
> > linking.
> >
> > On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:
> >
> > > Hi all,
> > >
> > > I've put together Hadoop 3.3.2 RC2 below:
> > >
> > > The RC is available at:
> > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > The RC tag is at:
> > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > The Maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > >
> > > You can find my public key at:
> > > https://downloads.apache.org/hadoop/common/KEYS
> > >
> > > I've done the following tests and they look good:
> > > - Ran all the unit tests
> > > - Started a single node HDFS cluster and tested a few simple commands
> > > - Ran all the tests in Spark using the RC2 artifacts
> > >
> > > Please evaluate the RC and vote, thanks!
> > >
> > > Best,
> > > Chao
> > >
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Wei-Chiu Chuang <we...@apache.org>.
I'll find time to check out the RC bits.
I just feel bad that the tarball is now more than 600MB in size.

On Fri, Jan 21, 2022 at 2:23 AM Steve Loughran <st...@cloudera.com.invalid>
wrote:

> *+1 binding.*
>
> reviewed binaries, source, artifacts in the staging maven repository in
> downstream builds. all good.
>
> *## test run*
>
> checked out the asf github repo at commit 6da346a358c into a location
> already set up with aws and azure test credentials
>
> ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6
>  -Dmarkers=delete -Dscale
> and hadoop-azure against azure cardiff with -Dparallel-tests=abfs
> -DtestsThreadCount=6
>
> all happy
>
>
>
> *## binary*
> downloaded KEYS and imported, so adding your key to my list (also signed
> this and updated the key servers)
>
> downloaded rc tar and verified
> ```
> > gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
> gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
> gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
> gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <sunchao@apache.org
> >"
> [full]
>
>
> > cat hadoop-3.3.2.tar.gz.sha512
> SHA512 (hadoop-3.3.2.tar.gz) =
>
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
>
> > shasum -a 512 hadoop-3.3.2.tar.gz
>
> cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
>  hadoop-3.3.2.tar.gz
> ```
>
>
> *# cloudstore against staged artifacts*
> ```
> cd ~/.m2/repository/org/apache/hadoop
> find . -name \*3.3.2\* -print | xargs rm -r
> ```
> ensures no local builds have tainted the repo.
>
> in cloudstore mvn build without tests
> ```
> mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
> ```
> this fetches all from asf staging
>
> ```
> Downloading from ASF Staging:
>
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> Downloaded from ASF Staging:
>
> https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
> (11 kB at 20 kB/s)
> ```
> there's no tests there, but it did audit the download process. FWIW, that
> project has switched to logback, so I now have all hadoop imports excluding
> slf4j and log4j. it takes too much effort right now.
>
> build works.
>
> tested abfs and s3a storediags, all happy
>
>
>
>
> *### google GCS against staged artifacts*
>
> gcs is now java 11 only, so I had to switch JVMs here.
>
> had to add a snapshots and staging profile, after which I could build and
> test.
>
> ```
>  -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
> ```
> two test failures were related to auth failures where the tests were trying
> to raise exceptions but things failed differently
> ```
> [ERROR] Failures:
> [ERROR]
>
> GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
> unexpected exception type thrown; expected:<java.io.FileNotFoundException>
> but was:<java.lang.IllegalArgumentException>
> [ERROR]
>
> GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
> value of: throwable.getMessage()
> expected: Failed to create GCS FS
> but was : A JSON key file may not be specified at the same time as
> credentials via configuration.
>
> ```
>
> I'm not worried here.
>
> ran cloudstore's diagnostics against gcs.
>
> Nice to see they are now collecting IOStatistics on their input streams. we
> really need to get this collected through the parquet/orc libs and then
> through the query engines.
>
> ```
> > bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/
>
> ...
> 2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
> (StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
> gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
> input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d
> {counters=((stream_read_close_operations=1)
> (stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
> (stream_read_bytes=7) (stream_read_exceptions=0)
> (stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
> (stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
> (stream_read_seek_forward_operations=0)
> (stream_read_operations_incomplete=1));
> gauges=();
> minimums=();
> maximums=();
> means=();
> }
> ...
> ```
>
> *### source*
>
> once I'd done builds and tests which fetched from staging, I did a local
> build and test
>
> repeated download/validate of source tarball, unzip/untar
>
> build with java11.
>
> I've not done the test run there, because that directory tree doesn't have
> the credentials, and this mornings run was good.
>
> altogether then: very happy. tests good, downstream libraries building and
> linking.
>
> On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:
>
> > Hi all,
> >
> > I've put together Hadoop 3.3.2 RC2 below:
> >
> > The RC is available at:
> > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > The RC tag is at:
> > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > The Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> >
> > You can find my public key at:
> > https://downloads.apache.org/hadoop/common/KEYS
> >
> > I've done the following tests and they look good:
> > - Ran all the unit tests
> > - Started a single node HDFS cluster and tested a few simple commands
> > - Ran all the tests in Spark using the RC2 artifacts
> >
> > Please evaluate the RC and vote, thanks!
> >
> > Best,
> > Chao
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
*+1 binding.*

reviewed binaries, source, artifacts in the staging maven repository in
downstream builds. all good.

*## test run*

checked out the asf github repo at commit 6da346a358c into a location
already set up with aws and azure test credentials

ran the hadoop-aws tests with -Dparallel-tests -DtestsThreadCount=6
 -Dmarkers=delete -Dscale
and hadoop-azure against azure cardiff with -Dparallel-tests=abfs
-DtestsThreadCount=6

all happy



*## binary*
downloaded KEYS and imported, so adding your key to my list (also signed
this and updated the key servers)

downloaded rc tar and verified
```
> gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <su...@apache.org>"
[full]


> cat hadoop-3.3.2.tar.gz.sha512
SHA512 (hadoop-3.3.2.tar.gz) =
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d

> shasum -a 512 hadoop-3.3.2.tar.gz
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
 hadoop-3.3.2.tar.gz
```
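
the same check in plain Java, for anyone scripting the verification; an illustrative sketch only, since shasum above already does the job:

```
// Compute the SHA-512 of a file and print the hex digest; compare the
// output against the published hadoop-3.3.2.tar.gz.sha512 value.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class Sha512Check {
  public static void main(String[] args) throws Exception {
    MessageDigest md = MessageDigest.getInstance("SHA-512");
    try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
      byte[] buf = new byte[8192];
      for (int n; (n = in.read(buf)) > 0; ) {
        md.update(buf, 0, n);
      }
    }
    StringBuilder hex = new StringBuilder();
    for (byte b : md.digest()) {
      hex.append(String.format("%02x", b));
    }
    System.out.println(hex);
  }
}
```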


*# cloudstore against staged artifacts*
```
cd ~/.m2/repository/org/apache/hadoop
find . -name \*3.3.2\* -print | xargs rm -r
```
ensures no local builds have tainted the repo.

in cloudstore mvn build without tests
```
mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
```
this fetches all from asf staging

```
Downloading from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
Downloaded from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
(11 kB at 20 kB/s)
```
there are no tests there, but it did audit the download process. FWIW, that
project has switched to logback, so I now have all hadoop imports excluding
slf4j and log4j. it takes too much effort right now.

build works.

tested abfs and s3a storediags, all happy




*### google GCS against staged artifacts*

gcs is now java 11 only, so I had to switch JVMs here.

had to add a snapshots and staging profile, after which I could build and
test.

```
 -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
```
two test failures were related to auth failures where the tests were trying
to raise exceptions but things failed differently
```
[ERROR] Failures:
[ERROR]
GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
unexpected exception type thrown; expected:<java.io.FileNotFoundException>
but was:<java.lang.IllegalArgumentException>
[ERROR]
GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
value of: throwable.getMessage()
expected: Failed to create GCS FS
but was : A JSON key file may not be specified at the same time as
credentials via configuration.

```

I'm not worried here.

ran cloudstore's diagnostics against gcs.

Nice to see they are now collecting IOStatistics on their input streams. we
really need to get this collected through the parquet/orc libs and then
through the query engines.

```
> bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/

...
2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
(StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d{counters=((stream_read_close_operations=1)
(stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
(stream_read_bytes=7) (stream_read_exceptions=0)
(stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
(stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
(stream_read_seek_forward_operations=0)
(stream_read_operations_incomplete=1));
gauges=();
minimums=();
maximums=();
means=();
}
...
```
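
to pull those stats out programmatically rather than via storediag: a minimal sketch, assuming the org.apache.hadoop.fs.statistics API shipped in 3.3.x (the path and read size are placeholders):

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.statistics.IOStatisticsLogging;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

public class StreamStats {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);  // e.g. a gs:// or s3a:// URI
    Configuration conf = new Configuration();
    try (FileSystem fs = FileSystem.newInstance(path.toUri(), conf);
         FSDataInputStream in = fs.open(path)) {
      in.read(new byte[8]);  // do a little IO so the counters move
      // retrieveIOStatistics() returns null for streams which don't publish
      // stats; ioStatisticsToString() maps null to an empty string.
      System.out.println(IOStatisticsLogging.ioStatisticsToString(
          IOStatisticsSupport.retrieveIOStatistics(in)));
    }
  }
}
```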

*### source*

once I'd done builds and tests which fetched from staging, I did a local
build and test

repeated download/validate of source tarball, unzip/untar

build with java11.

I've not done the test run there, because that directory tree doesn't have
the credentials, and this morning's run was good.

altogether then: very happy. tests good, downstream libraries building and
linking.

On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Stack <st...@duboce.net>.
+1 (binding)

        * Signature: ok
        * Checksum : ok
        * Rat check (1.8.0_191): ok
         - mvn clean apache-rat:check
        * Built from source (1.8.0_191): ok
         - mvn clean install  -DskipTests

Poking around in the binary, it looks good. Unpacked site. Looks right.
Checked a few links work.

Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours w/
chaos. Worked like 3.3.1...

I tried to build with 3.8.1 maven and got the below.

[ERROR] Failed to execute goal on project
hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for
project
org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2: Failed
to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
for org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact
org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
http://maven.restlet.org, default, releases+snapshots), apache.snapshots (
http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]

I used 3.6.3 mvn instead (looks like a simple fix).

Thanks for packaging up this fat point release Chao Sun.

S

On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Chao Sun <su...@apache.org>.
Thanks for the update Steve!

Mukund: could you please share the command to get the error above? I tried
a few approaches but couldn't reproduce it :(

Thanks again!

Best,
Chao

On Mon, Jan 24, 2022 at 7:16 AM Steve Loughran <st...@cloudera.com> wrote:

>
> fix is in t disable auditing, which is now the default
> https://issues.apache.org/jira/browse/HADOOP-18094
>
> everything is OK for apps which retain the same fs instances for the life
> of the app, but not for Hive...
>
> will do a better fix ASAP where in exchange for loss of auditing after a
> GC event, only weak refs are held in maps private to the auditor.
>
> i will put that in hadoop common as i would want to use the same code in
> thread-levek IOStatistics tracking.
> there we;d demand create an IOStatistics snapshot per thread,  short lived
> worker threads for stream io would still update the stats of the thread the
> stream was created in. this will let lus collect stats on store io through
> the orc/paquet readers for each thread doing work for a job, and include
> them in job stats.
>
> and how would that be useful? well. look at this coimparison of job/task
> commit performance with the manifest committer
> https://gist.github.com/steveloughran/7dc1e68220db67327b781b345b42c0b8
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
fix is in: disable auditing, which is now the default
https://issues.apache.org/jira/browse/HADOOP-18094

everything is OK for apps which retain the same fs instances for the life
of the app, but not for Hive...

will do a better fix ASAP where in exchange for loss of auditing after a GC
event, only weak refs are held in maps private to the auditor.

i will put that in hadoop common as i would want to use the same code in
thread-level IOStatistics tracking.
there we'd demand-create an IOStatistics snapshot per thread; short lived
worker threads for stream io would still update the stats of the thread the
stream was created in. this will let us collect stats on store io through
the orc/parquet readers for each thread doing work for a job, and include
them in job stats.

and how would that be useful? well, look at this comparison of job/task
commit performance with the manifest committer
https://gist.github.com/steveloughran/7dc1e68220db67327b781b345b42c0b8
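
the weak-ref idea in miniature (a hand-rolled sketch of the pattern, not the actual patch): per-thread state sits in a map whose values are only weakly referenced, so after a GC the auditor may lose an entry, but nothing pins the memory:

```
import java.lang.ref.WeakReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class WeakThreadStateMap<S> {
  // keyed by thread id; values are weakly referenced so a GC can reclaim them
  private final Map<Long, WeakReference<S>> state = new ConcurrentHashMap<>();

  /** Record state for the current thread, holding it only weakly. */
  public void put(S value) {
    state.put(Thread.currentThread().getId(), new WeakReference<>(value));
  }

  /** May return null once a GC has cleared the reference: the audit info
   *  is lost, but no memory is leaked. */
  public S get() {
    WeakReference<S> ref = state.get(Thread.currentThread().getId());
    return ref == null ? null : ref.get();
  }

  /** Prune entries whose referents have been collected. */
  public void prune() {
    state.values().removeIf(r -> r.get() == null);
  }
}
```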

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Chao Sun <su...@apache.org>.
Thanks all for taking a look!

> now some bad news
> https://issues.apache.org/jira/browse/HADOOP-18091
> S3A auditing leaks memory through ThreadLocal references

Thanks Steve for discovering this issue. I'll cancel this RC then, and
start a new one once HADOOP-18091 is fixed. Also still looking at the issue
that Mukund mentioned earlier.

Chao

On Sat, Jan 22, 2022 at 7:44 AM Steve Loughran <st...@cloudera.com.invalid>
wrote:

> `now some bad news
> https://issues.apache.org/jira/browse/HADOOP-18091
> S3A auditing leaks memory through ThreadLocal references
>
> surfaces in processes with long lived threads creating and destroying many
> s3a FS instances.
>
> working on a fix right now
>
> On Fri, 21 Jan 2022 at 21:02, Eric Payne <er...@yahoo.com.invalid>
> wrote:
>
> > +1 (binding)
> >
> > - Built from source
> >
> > - Brought up a non-secure virtual cluster w/ NN, 1 DN, RM, AHS, JHS, and 3
> > NMs
> >
> > - Validated inter- and intra-queue preemption
> >
> > - Validated exclusive node labels
> >
> > Thanks a lot Chao for your diligence and hard work on this release.
> >
> > Eric
> >
> > On Wednesday, January 19, 2022, 11:50:34 AM CST, Chao Sun <
> > sunchao@apache.org> wrote:
> >
> >
> >
> >
> >
> > Hi all,
> >
> > I've put together Hadoop 3.3.2 RC2 below:
> >
> > The RC is available at:
> > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > The RC tag is at:
> > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > The Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> >
> > You can find my public key at:
> > https://downloads.apache.org/hadoop/common/KEYS
> >
> > I've done the following tests and they look good:
> > - Ran all the unit tests
> > - Started a single node HDFS cluster and tested a few simple commands
> > - Ran all the tests in Spark using the RC2 artifacts
> >
> > Please evaluate the RC and vote, thanks!
> >
> > Best,
> > Chao
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
> > For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org
> >
> >
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
now some bad news
https://issues.apache.org/jira/browse/HADOOP-18091
S3A auditing leaks memory through ThreadLocal references

surfaces in processes with long lived threads creating and destroying many
s3a FS instances.

working on a fix right now
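
the shape of the problem, as a toy illustration (made-up class, not the s3a auditor): a long-lived pooled thread outlives many short-lived instances, each of which parks state in a ThreadLocal and never calls remove():

```
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalLeakDemo {

  // stand-in for a short-lived filesystem instance with per-thread state
  static class LeakyInstance implements AutoCloseable {
    private final ThreadLocal<byte[]> perThreadState =
        ThreadLocal.withInitial(() -> new byte[1_000_000]);
    void doWork() { perThreadState.get(); }  // parks ~1MB in this thread
    @Override public void close() { /* never calls perThreadState.remove() */ }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);  // long-lived thread
    for (int i = 0; i < 1_000; i++) {
      pool.submit(() -> {
        try (LeakyInstance fs = new LeakyInstance()) {
          fs.doWork();  // stale values can pile up in the worker's ThreadLocalMap
        }
        return null;
      }).get();
    }
    pool.shutdownNow();  // the per-thread map is only freed when the thread dies
  }
}
```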

On Fri, 21 Jan 2022 at 21:02, Eric Payne <er...@yahoo.com.invalid>
wrote:

> +1 (binding)
>
> - Built from source
>
> - Brought up a non-secure virtual cluster w/ NN, 1 DN, RM, AHS, JHS, and 3
> NMs
>
> - Validated inter- and intra-queue preemption
>
> - Validated exclusive node labels
>
> Thanks a lot Chao for your diligence and hard work on this release.
>
> Eric
>
> On Wednesday, January 19, 2022, 11:50:34 AM CST, Chao Sun <
> sunchao@apache.org> wrote:
>
>
>
>
>
> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org
>
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Eric Payne <er...@yahoo.com.INVALID>.
+1 (binding)

- Built from source

- Brought up a non-secure virtual cluster w/ NN, 1 DN, RM, AHS, JHS, and 3 NMs

- Validated inter- and intra-queue preemption

- Validated exclusive node labels

Thanks a lot Chao for your diligence and hard work on this release.

Eric

On Wednesday, January 19, 2022, 11:50:34 AM CST, Chao Sun <su...@apache.org> wrote: 





Hi all,

I've put together Hadoop 3.3.2 RC2 below:

The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
The RC tag is at:
https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
The Maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1332

You can find my public key at:
https://downloads.apache.org/hadoop/common/KEYS

I've done the following tests and they look good:
- Ran all the unit tests
- Started a single node HDFS cluster and tested a few simple commands
- Ran all the tests in Spark using the RC2 artifacts

Please evaluate the RC and vote, thanks!

Best,
Chao

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-help@hadoop.apache.org


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Steve Loughran <st...@cloudera.com.INVALID>.
*+1 binding.*

reviewed binaries, source, artifacts in the staging maven repository in
downstream builds. all good.

*## test run*

checked out the asf github repo at commit 6da346a358c into a location
already set up with aws and azure test credentials

ran the hadoop-aws tests with
  -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale
and hadoop-azure against azure cardiff with
  -Dparallel-tests=abfs -DtestsThreadCount=6

all happy
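
for anyone reproducing: a sketch of the invocations, run from the usual
module directories in the hadoop source tree (flags as quoted above):

```
cd hadoop-tools/hadoop-aws
mvn verify -Dparallel-tests -DtestsThreadCount=6 -Dmarkers=delete -Dscale

cd ../hadoop-azure
mvn verify -Dparallel-tests=abfs -DtestsThreadCount=6
```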



*## binary*
downloaded KEYS and imported it, adding your key to my list (also signed
it and updated the key servers)

downloaded rc tar and verified
```
> gpg2 --verify hadoop-3.3.2.tar.gz.asc hadoop-3.3.2.tar.gz
gpg: Signature made Sat Jan 15 23:41:10 2022 GMT
gpg:                using RSA key DE7FA241EB298D027C97B2A1D8F1A97BE51ECA98
gpg: Good signature from "Chao Sun (CODE SIGNING KEY) <su...@apache.org>"
[full]


> cat hadoop-3.3.2.tar.gz.sha512
SHA512 (hadoop-3.3.2.tar.gz) =
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d

> shasum -a 512 hadoop-3.3.2.tar.gz
cdd3d9298ba7d6e63ed63f93c159729ea14d2b7d5e3a0640b1761c86c7714a721f88bdfa8cb1d8d3da316f616e4f0ceaace4f32845ee4441e6aaa7a12b8c647d
 hadoop-3.3.2.tar.gz
```


*# cloudstore against staged artifacts*
```
cd ~/.m2/repository/org/apache/hadoop
find . -name \*3.3.2\* -print | xargs rm -r
```
ensures no local builds have tainted the repo.

in cloudstore mvn build without tests
```
mci -Pextra -Phadoop-3.3.2 -Psnapshots-and-staging
```
this fetches all from asf staging

```
Downloading from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
Downloaded from ASF Staging:
https://repository.apache.org/content/groups/staging/org/apache/hadoop/hadoop-client/3.3.2/hadoop-client-3.3.2.pom
(11 kB at 20 kB/s)
```
there are no tests there, but it did audit the download process. FWIW, that
project has switched to logback, so I now have all hadoop imports excluding
slf4j and log4j; sorting the logging dependencies out any other way takes
too much effort right now.

build works.

tested abfs and s3a storediags, all happy




*### google GCS against staged artifacts*

gcs is now java 11 only, so I had to switch JVMs here.

had to add a snapshots and staging profile, after which I could build and
test.

```
 -Dhadoop.three.version=3.3.2 -Psnapshots-and-staging
```
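the profile is roughly this (a sketch; the staging group URL is the
standard ASF one used earlier in this message):

```
<profile>
  <id>snapshots-and-staging</id>
  <repositories>
    <repository>
      <id>asf-staging</id>
      <url>https://repository.apache.org/content/groups/staging/</url>
    </repository>
    <repository>
      <id>asf-snapshots</id>
      <url>https://repository.apache.org/content/repositories/snapshots/</url>
      <snapshots><enabled>true</enabled></snapshots>
    </repository>
  </repositories>
</profile>
```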
two test failures were auth-related: the tests expected specific exceptions
but got different ones
```
[ERROR] Failures:
[ERROR]
GoogleHadoopFileSystemTest.eagerInitialization_fails_withInvalidCredentialsConfiguration:122
unexpected exception type thrown; expected:<java.io.FileNotFoundException>
but was:<java.lang.IllegalArgumentException>
[ERROR]
GoogleHadoopFileSystemTest.lazyInitialization_deleteCall_fails_withInvalidCredentialsConfiguration:100
value of: throwable.getMessage()
expected: Failed to create GCS FS
but was : A JSON key file may not be specified at the same time as
credentials via configuration.

```

I'm not worried here.

ran cloudstore's diagnostics against gcs.

Nice to see they are now collecting IOStatistics on their input streams. we
really need to get this collected through the parquet/orc libs and then
through the query engines.

```
> bin/hadoop jar $CLOUDSTORE storediag gs://stevel-london/

...
2022-01-20 17:52:47,447 [main] INFO  diag.StoreDiag
(StoreDurationInfo.java:<init>(56)) - Starting: Reading a file
gs://stevel-london/dir-9cbfc774-76ff-49c0-b216-d7800369c3e1/file
input stream summary: org.apache.hadoop.fs.FSDataInputStream@6cfd9a54:
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream@78c1372d{counters=((stream_read_close_operations=1)
(stream_read_seek_backward_operations=0) (stream_read_total_bytes=7)
(stream_read_bytes=7) (stream_read_exceptions=0)
(stream_read_seek_operations=0) (stream_read_seek_bytes_skipped=0)
(stream_read_operations=3) (stream_read_bytes_backwards_on_seek=0)
(stream_read_seek_forward_operations=0)
(stream_read_operations_incomplete=1));
gauges=();
minimums=();
maximums=();
means=();
}
...
```

*### source*

once I'd done builds and tests which fetched from staging, I did a local
build and test

repeated download/validate of source tarball, unzip/untar

build with java11.

I've not done the test run there, because that directory tree doesn't have
the credentials, and this morning's run was good.

altogether then: very happy. tests good, downstream libraries building and
linking.

On Wed, 19 Jan 2022 at 17:50, Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

Posted by Stack <st...@duboce.net>.
+1 (binding)

        * Signature: ok
        * Checksum : ok
        * Rat check (1.8.0_191): ok
         - mvn clean apache-rat:check
        * Built from source (1.8.0_191): ok
         - mvn clean install  -DskipTests

Poking around in the binary, it looks good. Unpacked the site; looks right.
Checked that a few links work.

Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours w/
chaos. Worked like 3.3.1...

I tried to build with maven 3.8.1 and got the error below.

[ERROR] Failed to execute goal on project
hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies for
project org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2:
Failed to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
for org.restlet.jee:org.restlet:jar:2.3.0: Could not transfer artifact
org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker
(http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet
(http://maven.restlet.org, default, releases+snapshots), apache.snapshots
(http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]

I used mvn 3.6.3 instead (looks like a simple fix).
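
maven 3.8.x ships a default mirror (maven-default-http-blocker) that blocks
plain-http repositories. a sketch of a workaround, assuming the restlet
artifacts are also served from an https endpoint (verify the URL before
relying on it), is a mirror entry in ~/.m2/settings.xml:

```
<settings>
  <mirrors>
    <mirror>
      <!-- route the blocked http restlet repo to an https endpoint;
           the URL here is an assumption, check it first -->
      <id>maven-restlet-https</id>
      <mirrorOf>maven-restlet</mirrorOf>
      <url>https://maven.restlet.talend.com</url>
    </mirror>
  </mirrors>
</settings>
```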

Thanks for packaging up this fat point release, Chao Sun.

S

On Wed, Jan 19, 2022 at 9:50 AM Chao Sun <su...@apache.org> wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC2 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> I've done the following tests and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>