Posted to dev@hive.apache.org by Denys Kuzmenko <dk...@cloudera.com.INVALID> on 2022/10/25 11:20:23 UTC

[VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Hi team,


Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/


The checksums are these:
- 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
apache-hive-4.0.0-alpha-2-bin.tar.gz
- 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
apache-hive-4.0.0-alpha-2-src.tar.gz
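
For anyone cross-checking a downloaded artifact against the values above, the digest can also be recomputed programmatically. This is a minimal, hypothetical helper (not part of the release; it just prints the digest in the same hex format as the .sha256 files):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Minimal SHA-256 checker: prints "<digest>  <filename>", matching the
// format of the .sha256 files published with the release artifacts.
public class Sha256Check {
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // e.g. java Sha256Check apache-hive-4.0.0-alpha-2-bin.tar.gz
        byte[] bytes = Files.readAllBytes(Paths.get(args[0]));
        System.out.println(sha256Hex(bytes) + "  " + args[0]);
    }
}
```

The printed value should match the corresponding line above byte for byte.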

Maven artifacts are available
here: https://repository.apache.org/content/repositories/orgapachehive-1117/

The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
this release in GitHub; you can see it at
https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0

The git commit hash is:
https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c

Voting will conclude in 72 hours.

Hive PMC Members: Please test and vote.

Thanks

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Stamatis Zampetakis <za...@gmail.com>.
Thanks for pushing this forward Denys.

A few general comments regarding the procedure.

Every time the artifacts, sources, hashes, or anything else significant
changes, previously cast votes are cancelled. It is usually easier to track
this by cancelling the RC and starting another.

For traceability reasons, it is also helpful to send an explicit
[CANCEL][VOTE] email [1, 2, 3, 4] when the vote is unsuccessful (preferably
as a new thread).

It is not strictly necessary to close the vote right at 72h.
The ASF release policy [5] requires the vote to be open for at least 72h
but does not specify when exactly it should be closed after that.
It's usually up to the release manager to decide and extend the duration if
necessary.

Best,
Stamatis

[1] https://lists.apache.org/thread/0501xbk1hvb46gy0w8ts6g5ttw7crssl
[2] https://lists.apache.org/thread/qzt7mgxjloh4841pvcdoz707bfxd4wk2
[3] https://lists.apache.org/thread/64foh1w7xwv9vs8m8grb23sc9f8h2bct
[4] https://lists.apache.org/thread/t09zfwfbjzon9hdv11smyyfydgx0m8zg
[5] https://www.apache.org/legal/release-policy.html#release-approval

On Sat, Oct 29, 2022, 8:32 PM Denys Kuzmenko <dk...@cloudera.com.invalid>
wrote:

> Hi team,
>
> Thank you for taking time to verify this RC!
> Unfortunately, we didn't get enough votes to go ahead with the release.
>
> Closing this vote as unsuccessful.
>
> Kind regards,
> Denys
>
> On Fri, Oct 28, 2022, 15:56 Stamatis Zampetakis <za...@gmail.com> wrote:
>
> > I think that having a proper NOTICE file in jars is important to comply
> > with the ASF release policy:
> > *
> https://www.apache.org/legal/release-policy.html#licensing-documentation
> > * https://www.apache.org/legal/src-headers.html#notice
> > * https://www.apache.org/legal/src-headers.html#faq-binaries
> > The fact that the NOTICE wasn't updated in alpha-1 is most likely an
> > oversight.
> >
> > Having said that the final decision is up to the release manager.
> >
> > Best,
> > Stamatis
> >
> > On Fri, Oct 28, 2022 at 1:57 PM Denys Kuzmenko
> > <dk...@cloudera.com.invalid> wrote:
> >
> > > Hi Stamatis,
> > >
> > > My bad, sorry. Removed the ".iml" files and updated the release
> > > artifacts.
> > > *** NO CODE CHANGES ***
> > > I was following the alpha-1 release, and the NOTICE wasn't updated there
> > > either. I don't think that should be a blocker. Noted that, plus the
> > > missing javadoc artifacts, for the next RC.
> > >
> > > fc7908f40ec854671c6795acb525649d83c071d70cf62961dc90a251a0f45e47
> > >  apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > f2814aadeca56ad1d8d9f7797b99d1670f6450f68ff6cae829384c9c102cd7a9
> > >  apache-hive-4.0.0-alpha-2-src.tar.gz
> > >
> > > Thanks,
> > > Denys
> > >
> > > On Fri, Oct 28, 2022 at 12:28 PM Stamatis Zampetakis <
> zabetak@gmail.com>
> > > wrote:
> > >
> > > > -1 (non-binding)
> > > >
> > > > Ubuntu 20.04.5 LTS, java version "1.8.0_261", Apache Maven 3.6.3
> > > >
> > > > * Verified signatures and checksums OK
> > > > * Checked diff between git repo and release sources (diff -qr hive-git
> > > > hive-src) KO (among others, *.iml files present in release sources but
> > > > not in git)
> > > > * Checked LICENSE, NOTICE, and README.md file OK
> > > > * Built from release sources (mvn clean install -DskipTests -Pitests) OK
> > > > * Package binaries from release sources (mvn clean package -DskipTests) OK
> > > > * Built from git tag (mvn clean install -DskipTests -Pitests) OK
> > > > * Run smoke tests on pseudo cluster using hive-dev-box [1] OK
> > > > * Spot check maven artifacts for general structure, LICENSE, NOTICE,
> > > > META-INF content KO (NOTICE file in hive-exec-4.0.0-alpha-2.jar has
> > > > copyright for 2020)
> > > >
> > > > Smoke tests included: * Derby metastore initialization; * simple CREATE
> > > > TABLE statements; * basic INSERT INTO VALUES statements; * basic SELECT
> > > > statements with simple INNER JOIN, WHERE, and GROUP BY variations; *
> > > > EXPLAIN statement variations; * ANALYZE TABLE variations.
> > > >
> > > > The negative vote is for the spurious *.iml (IntelliJ project) files
> > > > present in the release sources and the outdated NOTICE file in maven
> > > > artifacts.
> > > >
> > > > Also, javadoc artifacts are missing from the maven staging repo. I
> > > > checked previous releases and it seems they were not there either, so
> > > > this is not blocking but may be worth fixing for the next release.
> > > >
> > > > Best,
> > > > Stamatis
> > > >
> > > > [1] https://lists.apache.org/thread/7yqs7o6ncpottqx8txt0dtt9858ypsbb
> > > >
> > > > https://repository.apache.org/content/repositories/orgapachehive-1117/org/apache/hive/hive-exec/4.0.0-alpha-2/hive-exec-4.0.0-alpha-2.jar
> > > >
> > > > On Fri, Oct 28, 2022 at 10:32 AM Ayush Saxena <ay...@gmail.com>
> > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > > * Built from source.
> > > > > * Verified Checksums.
> > > > > * Verified Signatures
> > > > > * Ran some basic unit tests.
> > > > > * Ran some basic ACID & Iceberg related queries with Tez.
> > > > > * Skimmed through the Maven Artifacts, Looks Good.
> > > > >
> > > > > Thanx Denys for driving the release. Good Luck!!!
> > > > >
> > > > > -Ayush
> > > > >
> > > > > On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko <
> dkuzmenko@cloudera.com
> > > > > .invalid>
> > > > > wrote:
> > > > >
> > > > > > Extending voting for 24hr. 1 more +1 is needed from the PMC to
> > > promote
> > > > > the
> > > > > > release.
> > > > > > If not given, I'll be closing this vote as unsuccessful.
> > > > > >
> > > > > > On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <
> > cnauroth@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > +1 (non-binding)
> > > > > > >
> > > > > > > * Verified all checksums.
> > > > > > > * Verified all signatures.
> > > > > > > * Built from source.
> > > > > > >     * mvn clean install -Piceberg -DskipTests
> > > > > > > * Tests passed.
> > > > > > >     * mvn --fail-never clean verify -Piceberg -Pitests
> > > > > > > -Dmaven.test.jvm.args='-Xmx2048m
> -DJETTY_AVAILABLE_PROCESSORS=4'
> > > > > > >
> > > > > > > I figured out why my test runs were failing in HTTP server
> > > > > > initialization.
> > > > > > > Jetty enforces thread leasing to warn or abort if there aren't
> > > enough
> > > > > > > threads available [1]. During startup, it attempts to lease a
> > > thread
> > > > > per
> > > > > > > NIO selector [2]. By default, the number of NIO selectors to
> use
> > is
> > > > > > > determined based on available CPUs [3]. This is mostly a
> > > passthrough
> > > > to
> > > > > > > Runtime.availableProcessors() [4]. In my case, running on a
> > machine
> > > > > with
> > > > > > 16
> > > > > > > CPUs, this ended up creating more than 4 selectors, therefore
> > > > requiring
> > > > > > > more than 4 threads and violating the lease check. I was able
> to
> > > work
> > > > > > > around this by passing the JETTY_AVAILABLE_PROCESSORS system
> > > property
> > > > > to
> > > > > > > constrain the number of CPUs available to Jetty.
> > > > > > >
> > > > > > > If we are intentionally constraining the pool to 4 threads
> during
> > > > > itests,
> > > > > > > then would it also make sense to limit
> JETTY_AVAILABLE_PROCESSORS
> > > in
> > > > > > > maven.test.jvm.args of the root pom.xml, so that others don't
> run
> > > > into
> > > > > > this
> > > > > > > problem later? If so, I'll send a pull request.
> > > > > > >
> > > > > > > [1]
> > > > > > > https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > > > > > > [2]
> > > > > > > https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > > > > > > [3]
> > > > > > > https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > > > > > > [4]
> > > > > > > https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
> > > > > > >
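
The CPU-count lookup described above can be sketched as follows. This is a simplified illustration of the behavior (an explicit JETTY_AVAILABLE_PROCESSORS system property overriding Runtime.availableProcessors()), not Jetty's exact source:

```java
// Sketch of how Jetty resolves the CPU count used for selector sizing,
// modeled loosely on ProcessorUtils; simplified, not the actual Jetty code.
public class JettyProcessorsSketch {
    static int availableProcessors() {
        String prop = System.getProperty("JETTY_AVAILABLE_PROCESSORS");
        if (prop != null) {
            try {
                // An explicit override wins over the JVM-reported value.
                return Integer.parseInt(prop);
            } catch (NumberFormatException ignored) {
                // Malformed value: fall back to the JVM-reported count.
            }
        }
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        // On a 16-CPU machine this reports 16 unless the property is set,
        // which is why selector creation exceeded the 4-thread itest budget.
        System.out.println(availableProcessors());
    }
}
```

This matches the workaround in the test command above: constraining the property caps the number of selectors Jetty creates and keeps the thread lease within budget.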
> > > > > > > Chris Nauroth
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
> > > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > > >
> > > > > > > > You are right Ayush, I got sidetracked by the release notes
> (*
> > > > > > > [HIVE-19217]
> > > > > > > > - Upgrade to Hadoop 3.1.0) and I did not check the versions
> in
> > > the
> > > > > pom
> > > > > > > > file, apologies for the false alarm but better safe than
> sorry.
> > > > > > > >
> > > > > > > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2),
> > > > > > > > tests including select, join, groupby, orderby, explain (ast, cbo,
> > > > > > > > cbo cost, vectorization) are working correctly, against data in ORC
> > > > > > > > and parquet file formats.
> > > > > > > >
> > > > > > > > No problem for me either when running
> TestBeelinePasswordOption
> > > > > > locally.
> > > > > > > >
> > > > > > > > So my vote turns into a +1 (non-binding).
> > > > > > > >
> > > > > > > > Thanks a lot Denys for pushing the release process forward,
> > sorry
> > > > > again
> > > > > > > you
> > > > > > > > all for the oversight!
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > > Alessandro
> > > > > > > >
> > > > > > > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <
> ayushtkn@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Alessandro,
> > > > > > > > > From this:
> > > > > > > > >
> > > > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I guess you are using the wrong versions. The Hadoop version to be
> > > > > > > > > used should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > > > > > > > >
> > > > > > > > > The error also seems to be coming from Hadoop code
> > > > > > > > >
> > > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > >
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > The compareTo method in Hadoop was changed in HADOOP-16196, which
> > > > > > > > > isn't there in Hadoop 3.1.0; it is there post 3.2.1 [3].
> > > > > > > > >
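
To illustrate the binary incompatibility behind the NoSuchMethodError (a toy sketch with hypothetical classes, not Hadoop's actual Path): HADOOP-16196 parameterized Comparable, so code compiled against newer Hadoop links against the descriptor compareTo(Lorg/apache/hadoop/fs/Path;)I, which an older Path class never declared:

```java
// Toy illustration of the NoSuchMethodError: OldPath models the
// pre-HADOOP-16196 shape, NewPath the post-change shape. Hypothetical
// classes for demonstration only, not Hadoop code.
public class PathCompareDemo {
    // Old shape: raw Comparable, so the declared descriptor is compareTo(Object).
    static class OldPath implements Comparable {
        final String uri;
        OldPath(String uri) { this.uri = uri; }
        @Override
        public int compareTo(Object o) { return uri.compareTo(((OldPath) o).uri); }
    }

    // New shape: Comparable<NewPath>, which declares compareTo(NewPath) --
    // the descriptor that callers compiled against it link to.
    static class NewPath implements Comparable<NewPath> {
        final String uri;
        NewPath(String uri) { this.uri = uri; }
        @Override
        public int compareTo(NewPath o) { return uri.compareTo(o.uri); }
    }

    static boolean declaresCompareTo(Class<?> cls, Class<?> paramType) {
        try {
            cls.getDeclaredMethod("compareTo", paramType);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // A caller compiled against the new class invokes compareTo(NewPath);
        // when only the old class is on the classpath at runtime, that
        // descriptor does not exist, hence java.lang.NoSuchMethodError.
        System.out.println(declaresCompareTo(OldPath.class, OldPath.class)); // false
        System.out.println(declaresCompareTo(NewPath.class, NewPath.class)); // true
    }
}
```

The same mismatch is what Tez hit in HiveSplitGenerator when Hive built against Hadoop 3.3.1 ran on a 3.1.0 cluster.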
> > > > > > > > > One other thing: TestBeelinePasswordOption passes for me inside
> > > > > > > > > the source directory.
> > > > > > > > >
> > > > > > > > > [INFO] -------------------------------------------------------
> > > > > > > > > [INFO]  T E S T S
> > > > > > > > > [INFO] -------------------------------------------------------
> > > > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > > [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time
> > > > > > > > > elapsed: 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > >
> > > > > > > > > -Ayush
> > > > > > > > >
> > > > > > > > > [1]
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > > > > > > [2]
> > > > > > > > > https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > > > > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > > > > > > > >
> > > > > > > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > > > > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > > Hi everyone,
> > > > > > > > > >
> > > > > > > > > > unfortunately my vote is -1 (although non-binding) due to a
> > > > > > > > > > classpath error which prevents queries involving Tez from
> > > > > > > > > > completing (all the details are at the end of the email;
> > > > > > > > > > apologies for the lengthy text, but I wanted to provide all
> > > > > > > > > > the context).
> > > > > > > > > >
> > > > > > > > > > - verified gpg signature: OK
> > > > > > > > > >
> > > > > > > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > > > > > > >
> > > > > > > > > > $ gpg --import KEYS
> > > > > > > > > >
> > > > > > > > > > ...
> > > > > > > > > >
> > > > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > >
> > > > > > > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > > > > > > >
> > > > > > > > > > gpg:                using RSA key
> > > > > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > > >
> > > > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > > >
> > > > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING
> > KEY) <
> > > > > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > > > > >
> > > > > > > > > > gpg: WARNING: The key's User ID is not certified with a
> > > trusted
> > > > > > > > > signature!
> > > > > > > > > >
> > > > > > > > > > gpg:          There is no indication that the signature
> > > belongs
> > > > > to
> > > > > > > the
> > > > > > > > > > owner.
> > > > > > > > > >
> > > > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > > > >
> > > > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > >
> > > > > > > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > > > > > > >
> > > > > > > > > > gpg:                using RSA key
> > > > > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > > >
> > > > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > > >
> > > > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING
> > KEY) <
> > > > > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > > > > >
> > > > > > > > > > gpg: WARNING: The key's User ID is not certified with a
> > > trusted
> > > > > > > > > signature!
> > > > > > > > > >
> > > > > > > > > > gpg:          There is no indication that the signature
> > > belongs
> > > > > to
> > > > > > > the
> > > > > > > > > > owner.
> > > > > > > > > >
> > > > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > > > >
> > > > > > > > > > (AFAIK, this warning is OK)
> > > > > > > > > >
> > > > > > > > > > - verified package checksum: OK
> > > > > > > > > >
> > > > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) \
> > > > > > > > > >     <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > > > > > > >
> > > > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) \
> > > > > > > > > >     <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
> > > > > > > > > >
> > > > > > > > > > - verified maven build (no tests): OK
> > > > > > > > > >
> > > > > > > > > > $ mvn clean install -DskipTests
> > > > > > > > > >
> > > > > > > > > > ...
> > > > > > > > > >
> > > > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > > > [INFO] BUILD SUCCESS
> > > > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > > > [INFO] Total time:  04:31 min
> > > > > > > > > >
> > > > > > > > > > - checked release notes: OK
> > > > > > > > > >
> > > > > > > > > > - checked few modules in Nexus: OK
> > > > > > > > > >
> > > > > > > > > > - environment used:
> > > > > > > > > >
> > > > > > > > > > $ sw_vers
> > > > > > > > > >
> > > > > > > > > > ProductName: macOS
> > > > > > > > > >
> > > > > > > > > > ProductVersion: 11.6.8
> > > > > > > > > >
> > > > > > > > > > BuildVersion: 20G730
> > > > > > > > > >
> > > > > > > > > > $ mvn --version
> > > > > > > > > >
> > > > > > > > > > Apache Maven 3.8.1
> > (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > > > > > > >
> > > > > > > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > > > > > > >
> > > > > > > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > > > > > > > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > > > > > > >
> > > > > > > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > > > > > > >
> > > > > > > > > > OS name: "mac os x", version: "10.16", arch: "x86_64",
> > > family:
> > > > > > "mac"
> > > > > > > > > >
> > > > > > > > > > $ java -version
> > > > > > > > > >
> > > > > > > > > > openjdk version "1.8.0_292"
> > > > > > > > > >
> > > > > > > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build
> > > 1.8.0_292-b10)
> > > > > > > > > >
> > > > > > > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10,
> > > mixed
> > > > > > mode)
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > > > > > > >
> > > > > > > > > > This is the setup I have used:
> > > > > > > > > >
> > > > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > > >
> > > > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > > > > >
> > > > > > > > > > $ sw hive
> > > > > > > > > >
> > > > > > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > >
> > > > > > > > > > What follows are the test data and query I have tried, with the
> > > > > > > > > > associated stack trace for the error. It seems to be a classpath
> > > > > > > > > > issue: probably multiple versions of the class end up on the CP
> > > > > > > > > > and the classloader happened to load the “wrong one”.
> > > > > > > > > >
> > > > > > > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS
> > > PARQUET;
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON
> > (t1.a =
> > > > > t2.a)
> > > > > > > > WHERE
> > > > > > > > > > > t1.b < 3 AND t2.b > 1;
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > INFO  : Completed compiling
> > > > > > > > > > > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > > > > > > > > Time taken: 4.171 seconds
> > > > > > > > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > > > > > > > INFO  : Executing
> > > > > > > > > > > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b):
> > > > > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a =
> > > > > > > > > > > t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > > > > INFO  : Query ID =
> > > > > > > > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > > > INFO  : Total jobs = 1
> > > > > > > > > > > INFO  : Launching Job 1 out of 1
> > > > > > > > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > > > > > > > DEBUG : Task getting executed using mapred tag :
> > > > > > > > > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > > > > > > > INFO  : Subscribed to counters: [] for queryId:
> > > > > > > > > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > > > INFO  : Tez session hasn't been created yet. Opening
> > > session
> > > > > > > > > > > DEBUG : No local resources to process (other than
> > > hive-exec)
> > > > > > > > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b >
> > > > > > > > > > > 1 (Stage-1)
> > > > > > > > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM
> > > > > > > > > > > test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE
> > > > > > > > > > > t1.b < 3 AND t2.b > 1"}
> > > > > > > > > > > DEBUG : Setting Tez DAG access for
> > > > > > > > > > > queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > > > with viewAclString=dev, modifyStr=dev
> > > > > > > > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction
> to
> > > > > > > > > > > 0.30000001192092896
> > > > > > > > > > > INFO  : HS2 Host: [alpha2], Query ID:
> > > > > > > > > > > [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b],
> > > > > > > > > > > Dag ID: [dag_1666888075798_0001_1], DAG Session ID:
> > > > > > > > > > > [application_1666888075798_0001]
> > > > > > > > > > > INFO  : Status: Running (Executing on YARN cluster with App id
> > > > > > > > > > > application_1666888075798_0001)
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > ERROR : Status: Failed
> > > > > > > > > > > ERROR : Vertex failed, vertexName=Map 2,
> > > > > > > > > > > vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> > > > > > > > > > > vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> > > > > > > > > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]
> > > > > > > > > > > ERROR : Vertex failed, vertexName=Map 1,
> > > > > > > > > > > vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex
> > > > > > > > > > > vertex_1666888075798_0001_1_00 [Map 1] killed/failed due
> > > > > > > > > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >
> > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]
> > > > > > > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE.
> > > > > > failedVertices:2
> > > > > > > > > > > killedVertices:0
> > > > > > > > > > > ERROR : FAILED: Execution Error, return code 2 from
> > > > > > > > > > > org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex
> > failed,
> > > > > > > > > vertexName=Map
> > > > > > > > > > > 2, vertexId=vertex_1666888075798_0001_1_01,
> > > > diagnostics=[Vertex
> > > > > > > > > > > vertex_1666888075798_0001_1_01 [Map 2] killed/failed
> due
> > > > > > > > > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2
> initializer
> > > > > failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > >
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >
> > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at
> > > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >
> > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > java.security.AccessController.doPrivileged(Native
> > > > > > > Method)
> > > > > > > > > > >         at
> > > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >
> > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]Vertex failed, vertexName=Map 1,
> > > > > > > > > > vertexId=vertex_1666888075798_0001_1_00,
> > > > > > > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map
> > 1]
> > > > > > > > > killed/failed
> > > > > > > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1
> > > initializer
> > > > > > > failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > >
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at
> > > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > java.security.AccessController.doPrivileged(Native
> > > > > > > Method)
> > > > > > > > > > >         at
> > > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE.
> > > failedVertices:2
> > > > > > > > > > > killedVertices:0
> > > > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a
> t1
> > > > JOIN
> > > > > > > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND
> t2.b
> > >
> > > 1
> > > > > > > > > > > INFO  : Completed executing
> > > > > > > > > > >
> > > > > > > > >
> > > > > > >
> > > > >
> > >
> command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > > > > > > > > Time taken: 6.983 seconds
> > > > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a
> t1
> > > > JOIN
> > > > > > > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND
> t2.b
> > >
> > > 1
> > > > > > > > > > > Error: Error while compiling statement: FAILED:
> Execution
> > > > > Error,
> > > > > > > > return
> > > > > > > > > > > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
> > > > Vertex
> > > > > > > > failed,
> > > > > > > > > > > vertexName=Map 2,
> > vertexId=vertex_1666888075798_0001_1_01,
> > > > > > > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map
> > 2]
> > > > > > > > > killed/failed
> > > > > > > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2
> > > initializer
> > > > > > > failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > >
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at
> > > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > java.security.AccessController.doPrivileged(Native
> > > > > > > Method)
> > > > > > > > > > >         at
> > > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]Vertex failed, vertexName=Map 1,
> > > > > > > > > > vertexId=vertex_1666888075798_0001_1_00,
> > > > > > > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map
> > 1]
> > > > > > > > > killed/failed
> > > > > > > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1
> > > initializer
> > > > > > > failed,
> > > > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > > >
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > > >         at
> > > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > java.security.AccessController.doPrivileged(Native
> > > > > > > Method)
> > > > > > > > > > >         at
> > > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE.
> > > failedVertices:2
> > > > > > > > > > > killedVertices:0 (state=08S01,code=2)
> > > > > > > > > >
> > > > > > > > > >         at
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
> Best regards,
> Alessandro
>
> On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ayushtkn@gmail.com> wrote:
>
> > Chris,
> > The KEYS file is at:
> > https://downloads.apache.org/hive/KEYS
> >
> > -Ayush
> >
> > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cnauroth@apache.org> wrote:
> >
> > > Could someone please point me toward the right KEYS file to import so
> > > that I can verify signatures? Thanks!
> > >
> > > I'm seeing numerous test failures due to "Insufficient configured
> > > threads" while trying to start the HTTP server. One example is
> > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > >
> > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time
> > > elapsed: 11.742 s <<< FAILURE! - in
> > > org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time
> > > elapsed: 11.733 s  <<< ERROR!
> > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException:
> > > Insufficient configured threads: required=4 < max=4 for
> > > QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > at

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Denys Kuzmenko <dk...@cloudera.com.INVALID>.
Hi team,

Thank you for taking the time to verify this RC!
Unfortunately, we didn't get enough votes to go ahead with the release.

Closing this vote as unsuccessful.

Kind regards,
Denys

On Fri, Oct 28, 2022, 15:56 Stamatis Zampetakis <za...@gmail.com> wrote:

> I think that having a proper NOTICE file in jars is important to comply
> with the ASF release policy:
> * https://www.apache.org/legal/release-policy.html#licensing-documentation
> * https://www.apache.org/legal/src-headers.html#notice
> * https://www.apache.org/legal/src-headers.html#faq-binaries
> The fact that the NOTICE wasn't updated in alpha-1 is most likely an
> oversight.
>
> Having said that the final decision is up to the release manager.
>
> Best,
> Stamatis
>
> On Fri, Oct 28, 2022 at 1:57 PM Denys Kuzmenko
> <dk...@cloudera.com.invalid> wrote:
>
> > Hi Stamatis,
> >
> > My bad, sorry. Removed the ".iml" files and updated the release
> > artifacts.
> > *** NO CODE CHANGES ***
> > I was following the alpha-1 release, and the NOTICE wasn't updated there
> > either. I don't think that should be a blocker. Noted that + javadoc
> > artifacts for the new RC.
> >
> > fc7908f40ec854671c6795acb525649d83c071d70cf62961dc90a251a0f45e47
> >  apache-hive-4.0.0-alpha-2-bin.tar.gz
> > f2814aadeca56ad1d8d9f7797b99d1670f6450f68ff6cae829384c9c102cd7a9
> >  apache-hive-4.0.0-alpha-2-src.tar.gz
> >
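[Editor's note: the checksum comparison above can be reproduced locally. A minimal sketch of that step, using stand-in bytes since the tarball itself is not part of this thread; for a real check, read apache-hive-4.0.0-alpha-2-bin.tar.gz from disk and paste the digest published in the email.]

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    # Produces the same digest that `sha256sum <file>` prints.
    return hashlib.sha256(data).hexdigest()

# Stand-in artifact bytes; for a real verification, use:
# data = open("apache-hive-4.0.0-alpha-2-bin.tar.gz", "rb").read()
data = b"abc"
# Stand-in published value; in practice this is copied from the vote email.
published = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

assert sha256_hex(data) == published, "checksum mismatch: do not vote +1"
print("checksum OK")
```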
> > Thanks,
> > Denys
> >
> > On Fri, Oct 28, 2022 at 12:28 PM Stamatis Zampetakis <za...@gmail.com>
> > wrote:
> >
> > > -1 (non-binding)
> > >
> > > Ubuntu 20.04.5 LTS, java version "1.8.0_261", Apache Maven 3.6.3
> > >
> > > * Verified signatures and checksums OK
> > > * Checked diff between git repo and release sources (diff -qr hive-git
> > >   hive-src) KO (among others, *.iml files present in release sources
> > >   but not in git)
> > > * Checked LICENSE, NOTICE, and README.md file OK
> > > * Built from release sources (mvn clean install -DskipTests -Pitests) OK
> > > * Package binaries from release sources (mvn clean package -DskipTests) OK
> > > * Built from git tag (mvn clean install -DskipTests -Pitests) OK
> > > * Run smoke tests on pseudo cluster using hive-dev-box [1] OK
> > > * Spot check maven artifacts for general structure, LICENSE, NOTICE,
> > > META-INF content KO (NOTICE file in hive-exec-4.0.0-alpha-2.jar has
> > > copyright for 2020)
> > >
> > > Smoke tests included:
> > > * Derby metastore initialization;
> > > * simple CREATE TABLE statements;
> > > * basic INSERT INTO VALUES statements;
> > > * basic SELECT statements with simple INNER JOIN, WHERE, and GROUP BY
> > >   variations;
> > > * EXPLAIN statement variations;
> > > * ANALYZE TABLE variations.
> > >
> > > The negative vote is for the spurious *.iml (IntelliJ project) files
> > > present in the release sources and the outdated NOTICE file in the
> > > Maven artifacts.
> > >
> > > Also, javadoc artifacts are missing from the Maven staging repo. I
> > > checked previous releases and it seems that they were not there either,
> > > so this is not blocking but may be worth fixing for the next release.
> > >
> > > Best,
> > > Stamatis
> > >
> > > [1] https://lists.apache.org/thread/7yqs7o6ncpottqx8txt0dtt9858ypsbb
> > >
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/org/apache/hive/hive-exec/4.0.0-alpha-2/hive-exec-4.0.0-alpha-2.jar
> > >
> > > On Fri, Oct 28, 2022 at 10:32 AM Ayush Saxena <ay...@gmail.com>
> > wrote:
> > >
> > > > +1 (non-binding)
> > > > * Built from source.
> > > > * Verified Checksums.
> > > > * Verified Signatures
> > > > * Ran some basic unit tests.
> > > > * Ran some basic ACID & Iceberg related queries with Tez.
> > > > * Skimmed through the Maven Artifacts, Looks Good.
> > > >
> > > > Thanx Denys for driving the release. Good Luck!!!
> > > >
> > > > -Ayush
> > > >
> > > > On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko
> > > > <dkuzmenko@cloudera.com.invalid> wrote:
> > > >
> > > > > Extending voting for 24hr. One more +1 is needed from the PMC to
> > > > > promote the release.
> > > > > If not given, I'll be closing this vote as unsuccessful.
> > > > >
> > > > > On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth
> > > > > <cnauroth@apache.org> wrote:
> > > > >
> > > > > > +1 (non-binding)
> > > > > >
> > > > > > * Verified all checksums.
> > > > > > * Verified all signatures.
> > > > > > * Built from source.
> > > > > >     * mvn clean install -Piceberg -DskipTests
> > > > > > * Tests passed.
> > > > > >     * mvn --fail-never clean verify -Piceberg -Pitests
> > > > > > -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
> > > > > >
> > > > > > I figured out why my test runs were failing in HTTP server initialization.
> > > > > > Jetty enforces thread leasing to warn or abort if there aren't enough
> > > > > > threads available [1]. During startup, it attempts to lease a thread per
> > > > > > NIO selector [2]. By default, the number of NIO selectors to use is
> > > > > > determined based on available CPUs [3]. This is mostly a passthrough to
> > > > > > Runtime.availableProcessors() [4]. In my case, running on a machine with
> > > > > > 16 CPUs, this ended up creating more than 4 selectors, therefore requiring
> > > > > > more than 4 threads and violating the lease check. I was able to work
> > > > > > around this by passing the JETTY_AVAILABLE_PROCESSORS system property to
> > > > > > constrain the number of CPUs available to Jetty.
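For illustration, here is a simplified, self-contained sketch of the budget arithmetic described above. The one-thread-per-selector leasing and the concrete numbers are assumptions for the demo, not Jetty's exact accounting; Jetty's real check lives in ThreadPoolBudget [1].

```java
// Simplified sketch (not Jetty's actual code) of the thread-budget check:
// each NIO selector leases one thread from the pool, and startup fails
// when the leases leave no threads free for application work.
public class ThreadBudgetDemo {

    /** Mimics the warn/abort behavior: throws if leases exhaust the pool. */
    static int check(int maxThreads, int selectors) {
        int leased = selectors;              // assumption: one thread per selector
        int remaining = maxThreads - leased;
        if (remaining <= 0) {
            throw new IllegalStateException(
                "Insufficient configured threads: required=" + leased
                + " max=" + maxThreads);
        }
        return remaining;
    }

    public static void main(String[] args) {
        // A 4-thread pool survives a small selector count...
        System.out.println(check(4, 2));     // prints 2 (threads left over)
        // ...but on a many-CPU box more selectors are derived and startup fails,
        // which is why capping JETTY_AVAILABLE_PROCESSORS works around it.
        try {
            check(4, 8);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```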
> > > > > >
> > > > > > If we are intentionally constraining the pool to 4 threads during itests,
> > > > > > would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
> > > > > > maven.test.jvm.args of the root pom.xml, so that others don't run into
> > > > > > this problem later? If so, I'll send a pull request.
> > > > > >
> > > > > > [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > > > > > [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > > > > > [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > > > > > [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
> > > > > >
> > > > > > Chris Nauroth
> > > > > >
> > > > > >
> > > > > > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
> > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > >
> > > > > > > You are right Ayush, I got sidetracked by the release notes
> > > > > > > (* [HIVE-19217] - Upgrade to Hadoop 3.1.0) and did not check the
> > > > > > > versions in the pom file. Apologies for the false alarm, but better
> > > > > > > safe than sorry.
> > > > > > >
> > > > > > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
> > > > > > > including select, join, groupby, orderby, explain (ast, cbo, cbo cost,
> > > > > > > vectorization) are working correctly, against data in ORC and parquet
> > > > > > > file format.
> > > > > > >
> > > > > > > No problem for me either when running TestBeelinePasswordOption locally.
> > > > > > >
> > > > > > > So my vote turns into a +1 (non-binding).
> > > > > > >
> > > > > > > Thanks a lot Denys for pushing the release process forward, sorry again
> > > > > > > to you all for the oversight!
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Alessandro
> > > > > > >
> > > > > > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ayushtkn@gmail.com
> >
> > > > wrote:
> > > > > > >
> > > > > > > > Hi Alessandro,
> > > > > > > > From this:
> > > > > > > >
> > > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > > >
> > > > > > > >
> > > > > > > > I guess you are using the wrong versions. The Hadoop version to be
> > > > > > > > used should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > > > > > > >
> > > > > > > > The error also seems to be coming from Hadoop code
> > > > > > > >
> > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > >
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >
> > > > > > > >
> > > > > > > > The compareTo method in Hadoop was changed in HADOOP-16196, which
> > > > > > > > isn't in Hadoop 3.1.0; it is there post 3.2.1 [3].
> > > > > > > >
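As a self-contained illustration (not Hadoop's actual code) of why this surfaces as exactly `NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I`: moving a class from raw `Comparable` to `Comparable<Path>` changes the method descriptor that callers compile against, so Hive built against the newer Hadoop invokes `compareTo(Path)`, a descriptor that simply does not exist in the older class file. The class names below are hypothetical stand-ins for the two Hadoop versions.

```java
import java.lang.reflect.Method;

public class CompareToDemo {
    // Stand-in for Path before HADOOP-16196: raw Comparable, so the only
    // compareTo in the class file takes Object.
    static class OldPath implements Comparable {
        public int compareTo(Object o) { return 0; }
    }

    // Stand-in for Path after HADOOP-16196: Comparable<NewPath>, so javac
    // emits compareTo(NewPath) plus a synthetic bridge compareTo(Object).
    static class NewPath implements Comparable<NewPath> {
        public int compareTo(NewPath o) { return 0; }
    }

    // getMethod matches parameter types exactly, like JVM linkage does for
    // a method descriptor: compareTo(OldPath) is simply absent in OldPath.
    static boolean hasTypedCompareTo(Class<?> c) {
        try {
            Method m = c.getMethod("compareTo", c);
            return m != null;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(hasTypedCompareTo(OldPath.class)); // false
        System.out.println(hasTypedCompareTo(NewPath.class)); // true
    }
}
```

So code compiled against the 3.2.1+ signature links fine on a matching cluster but fails at run time on 3.1.0, which matches the ROOT_INPUT_INIT_FAILURE seen with the mismatched hive-dev-box setup.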
> > > > > > > > Another thing: TestBeelinePasswordOption passes for me inside the
> > > > > > > > source directory.
> > > > > > > >
> > > > > > > > [INFO] -------------------------------------------------------
> > > > > > > > [INFO]  T E S T S
> > > > > > > > [INFO] -------------------------------------------------------
> > > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > >
> > > > > > > > -Ayush
> > > > > > > >
> > > > > > > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > > > > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > > > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > > > > > > >
> > > > > > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > > > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > > > >
> > > > > > > > > Hi everyone,
> > > > > > > > >
> > > > > > > > > unfortunately my vote is -1 (although non-binding) due to a
> > > > > > > > > classpath error which prevents queries involving Tez from completing
> > > > > > > > > (all the details are at the end of the email; apologies for the
> > > > > > > > > lengthy text, but I wanted to provide all the context).
> > > > > > > > >
> > > > > > > > > - verified gpg signature: OK
> > > > > > > > >
> > > > > > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > > > > > >
> > > > > > > > > $ gpg --import KEYS
> > > > > > > > >
> > > > > > > > > ...
> > > > > > > > >
> > > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > >
> > > > > > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > > > > > >
> > > > > > > > > gpg:                using RSA key
> > > > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > >
> > > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > >
> > > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING
> KEY) <
> > > > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > > > >
> > > > > > > > > gpg: WARNING: The key's User ID is not certified with a
> > trusted
> > > > > > > > signature!
> > > > > > > > >
> > > > > > > > > gpg:          There is no indication that the signature
> > belongs
> > > > to
> > > > > > the
> > > > > > > > > owner.
> > > > > > > > >
> > > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > > >
> > > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > >
> > > > > > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > > > > > >
> > > > > > > > > gpg:                using RSA key
> > > > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > >
> > > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > >
> > > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING
> KEY) <
> > > > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > > > >
> > > > > > > > > gpg: WARNING: The key's User ID is not certified with a
> > trusted
> > > > > > > > signature!
> > > > > > > > >
> > > > > > > > > gpg:          There is no indication that the signature
> > belongs
> > > > to
> > > > > > the
> > > > > > > > > owner.
> > > > > > > > >
> > > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > > >
> > > > > > > > > (AFAIK, this warning is OK)
> > > > > > > > >
> > > > > > > > > - verified package checksum: OK
> > > > > > > > >
> > > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > > > > > >
> > > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
> > > > > > > > >
> > > > > > > > > - verified maven build (no tests): OK
> > > > > > > > >
> > > > > > > > > $ mvn clean install -DskipTests
> > > > > > > > >
> > > > > > > > > ...
> > > > > > > > >
> > > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > > [INFO] BUILD SUCCESS
> > > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > > [INFO] Total time:  04:31 min
> > > > > > > > >
> > > > > > > > > - checked release notes: OK
> > > > > > > > >
> > > > > > > > > - checked few modules in Nexus: OK
> > > > > > > > >
> > > > > > > > > - environment used:
> > > > > > > > >
> > > > > > > > > $ sw_vers
> > > > > > > > >
> > > > > > > > > ProductName: macOS
> > > > > > > > >
> > > > > > > > > ProductVersion: 11.6.8
> > > > > > > > >
> > > > > > > > > BuildVersion: 20G730
> > > > > > > > >
> > > > > > > > > $ mvn --version
> > > > > > > > >
> > > > > > > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > > > > > >
> > > > > > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > > > > > >
> > > > > > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > > > > > > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > > > > > >
> > > > > > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > > > > > >
> > > > > > > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > > > > > > > >
> > > > > > > > > $ java -version
> > > > > > > > >
> > > > > > > > > openjdk version "1.8.0_292"
> > > > > > > > >
> > > > > > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > > > > > > >
> > > > > > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > > > > > >
> > > > > > > > > This is the setup I have used:
> > > > > > > > >
> > > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > >
> > > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > > > >
> > > > > > > > > $ sw hive
> > > > > > > > >
> > > > > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > >
> > > > > > > > > In what follows are the test data and query I tried, with the
> > > > > > > > > associated stacktrace for the error. It seems a classpath issue:
> > > > > > > > > probably multiple versions of the class end up on the CP and the
> > > > > > > > > classloader happened to load the “wrong one”.
> > > > > > > > >
> > > > > > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS
> > PARQUET;
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1;
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > > > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > > > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b):
> > > > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > > INFO  : Total jobs = 1
> > > > > > > > > > INFO  : Launching Job 1 out of 1
> > > > > > > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > > > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > > > > > > INFO  : Subscribed to counters: [] for queryId:
> > > > > > > > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > > > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > > > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > > > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > > > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > > > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to
> > > > > > > > > > 0.30000001192092896
> > > > > > > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > > > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > ERROR : Status: Failed
> > > > > > > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]
> > > > > > > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]
> > > > > > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE.
> > failedVertices:2
> > > > > > > > > > killedVertices:0
> > > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1
> > > JOIN
> > > > > > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b
> >
> > 1
> > > > > > > > > > INFO  : Completed executing
> > > > > > > > > >
> > > > > > > >
> > > > > >
> > > >
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > > > > > > > Time taken: 6.983 seconds
> > > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1
> > > JOIN
> > > > > > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b
> >
> > 1
> > > > > > > > > > Error: Error while compiling statement: FAILED: Execution
> > > > Error,
> > > > > > > return
> > > > > > > > > > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask.
> > > Vertex
> > > > > > > failed,
> > > > > > > > > > vertexName=Map 2,
> vertexId=vertex_1666888075798_0001_1_01,
> > > > > > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map
> 2]
> > > > > > > > killed/failed
> > > > > > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2
> > initializer
> > > > > > failed,
> > > > > > > > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > >
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at
> > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at
> > java.security.AccessController.doPrivileged(Native
> > > > > > Method)
> > > > > > > > > >         at
> > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]Vertex failed, vertexName=Map 1,
> > > > > > > > > vertexId=vertex_1666888075798_0001_1_00,
> > > > > > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map
> 1]
> > > > > > > > killed/failed
> > > > > > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1
> > initializer
> > > > > > failed,
> > > > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > > > java.lang.NoSuchMethodError:
> > > > > > > > > >
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > > >         at
> > > > > > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > > >         at
> > java.security.AccessController.doPrivileged(Native
> > > > > > Method)
> > > > > > > > > >         at
> > javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE.
> > failedVertices:2
> > > > > > > > > > killedVertices:0 (state=08S01,code=2)
> > > > > > > > >
> > > > > > > > >         at
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
> Best regards,
> Alessandro
>
> On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ayushtkn@gmail.com> wrote:
>
> > Chris,
> > The KEYS file is at:
> > https://downloads.apache.org/hive/KEYS
> >
> > -Ayush
> >
> > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cnauroth@apache.org> wrote:
> >
> > > Could someone please point me toward the right KEYS file to import so
> > > that I can verify signatures? Thanks!
> > >
> > > I'm seeing numerous test failures due to "Insufficient configured
> > > threads" while trying to start the HTTP server. One example is
> > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > >
> > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> > > 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed:
> > > 11.733 s  <<< ERROR!
> > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException:
> > > Insufficient configured threads: required=4 < max=4 for
> > > QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > >         at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > >         at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > >         at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > >         at java.lang.reflect.Method.invoke(Method.java:498)
> > >         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > >         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > >         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > >         at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > >         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > >         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > >         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > >         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > >         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > >         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > >         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > >         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > >         at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > >         at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > >         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > Caused by: java.lang.IllegalStateException: Insufficient configured
> > > threads: required=4 < max=4 for
> > > QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > >         at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Stamatis Zampetakis <za...@gmail.com>.
I think that having a proper NOTICE file in jars is important to comply
with the ASF release policy:
* https://www.apache.org/legal/release-policy.html#licensing-documentation
* https://www.apache.org/legal/src-headers.html#notice
* https://www.apache.org/legal/src-headers.html#faq-binaries
The fact that the NOTICE wasn't updated in alpha-1 is most likely an
oversight.

Having said that, the final decision is up to the release manager.

Best,
Stamatis

On Fri, Oct 28, 2022 at 1:57 PM Denys Kuzmenko
<dk...@cloudera.com.invalid> wrote:

> Hi Stamatis,
>
> My bad, sorry. Removed the ".iml" files and updated the release artifacts.
> *** NO CODE CHANGES ***
> I was following the alpha-1 release process, and the NOTICE wasn't updated
> there either. I don't think that should be a blocker. Noted that, plus the
> missing javadoc artifacts, for the next RC.
>
> fc7908f40ec854671c6795acb525649d83c071d70cf62961dc90a251a0f45e47
>  apache-hive-4.0.0-alpha-2-bin.tar.gz
> f2814aadeca56ad1d8d9f7797b99d1670f6450f68ff6cae829384c9c102cd7a9
>  apache-hive-4.0.0-alpha-2-src.tar.gz
>
> Thanks,
> Denys
>
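[Editor's note] The SHA-256 comparison voters run against the posted sums (e.g. with `sha256sum -c` or `shasum -a 256`) can be sketched in plain Java. This is an illustrative sketch only: the class name is made up, and the input is the standard "abc" test vector rather than a release tarball, whose bytes you would instead stream through the digest and compare against the value posted in the vote email.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal sketch of the checksum comparison performed on release artifacts.
public class Sha256Check {
    // Render a digest as the lowercase hex string found in .sha256 files.
    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        String actual = hex(md.digest("abc".getBytes(StandardCharsets.UTF_8)));
        // Known SHA-256 of the ASCII string "abc":
        String expected =
            "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad";
        System.out.println(actual.equals(expected) ? "checksum OK" : "MISMATCH");
    }
}
```

For a real artifact, the only change is feeding the tarball's bytes into `md.update(...)` chunk by chunk before calling `digest()`.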
> On Fri, Oct 28, 2022 at 12:28 PM Stamatis Zampetakis <za...@gmail.com>
> wrote:
>
> > -1 (non-binding)
> >
> > Ubuntu 20.04.5 LTS, java version "1.8.0_261", Apache Maven 3.6.3
> >
> > * Verified signatures and checksums OK
> > * Checked diff between git repo and release sources (diff -qr hive-git
> > hive-src) KO (among others, *.iml files present in release sources but
> > not in git)
> > * Checked LICENSE, NOTICE, and README.md file OK
> > * Built from release sources (mvn clean install -DskipTests -Pitests) OK
> > * Packaged binaries from release sources (mvn clean package -DskipTests) OK
> > * Built from git tag (mvn clean install -DskipTests -Pitests) OK
> > * Run smoke tests on pseudo cluster using hive-dev-box [1] OK
> > * Spot check maven artifacts for general structure, LICENSE, NOTICE,
> > META-INF content KO (NOTICE file in hive-exec-4.0.0-alpha-2.jar has
> > copyright for 2020)
> >
> > Smoke tests included:
> > * Derby metastore initialization;
> > * simple CREATE TABLE statements;
> > * basic INSERT INTO VALUES statements;
> > * basic SELECT statements with simple INNER JOIN, WHERE, and GROUP BY
> > variations;
> > * EXPLAIN statement variations;
> > * ANALYZE TABLE variations.
> >
> > The negative vote is for the spurious *.iml (IntelliJ project) files
> > present in the release sources and the outdated NOTICE file in maven
> > artifacts.
> >
> > Also, javadoc artifacts are missing from the maven staging repo. I checked
> > previous releases and it seems they were not there either, so this is
> > not blocking but may be worth fixing for the next release.
> >
> > Best,
> > Stamatis
> >
> > [1] https://lists.apache.org/thread/7yqs7o6ncpottqx8txt0dtt9858ypsbb
> >
> >
> https://repository.apache.org/content/repositories/orgapachehive-1117/org/apache/hive/hive-exec/4.0.0-alpha-2/hive-exec-4.0.0-alpha-2.jar
> >
> > On Fri, Oct 28, 2022 at 10:32 AM Ayush Saxena <ay...@gmail.com>
> wrote:
> >
> > > +1 (non-binding)
> > > * Built from source.
> > > * Verified Checksums.
> > > * Verified Signatures
> > > * Ran some basic unit tests.
> > > * Ran some basic ACID & Iceberg related queries with Tez.
> > > * Skimmed through the Maven Artifacts, Looks Good.
> > >
> > > Thanx Denys for driving the release. Good Luck!!!
> > >
> > > -Ayush
> > >
> > > On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko <dkuzmenko@cloudera.com.invalid> wrote:
> > >
> > > > Extending voting for 24 hours. One more +1 is needed from the PMC
> > > > to promote the release.
> > > > If not given, I'll close this vote as unsuccessful.
> > > >
> > > > On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <cn...@apache.org>
> > > > wrote:
> > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > * Verified all checksums.
> > > > > * Verified all signatures.
> > > > > * Built from source.
> > > > >     * mvn clean install -Piceberg -DskipTests
> > > > > * Tests passed.
> > > > >     * mvn --fail-never clean verify -Piceberg -Pitests
> > > > > -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
> > > > >
> > > > > I figured out why my test runs were failing in HTTP server
> > > > > initialization. Jetty enforces thread leasing to warn or abort if
> > > > > there aren't enough threads available [1]. During startup, it
> > > > > attempts to lease a thread per NIO selector [2]. By default, the
> > > > > number of NIO selectors to use is determined based on available
> > > > > CPUs [3]. This is mostly a passthrough to
> > > > > Runtime.availableProcessors() [4]. In my case, running on a machine
> > > > > with 16 CPUs, this ended up creating more than 4 selectors,
> > > > > therefore requiring more than 4 threads and violating the lease
> > > > > check. I was able to work around this by passing the
> > > > > JETTY_AVAILABLE_PROCESSORS system property to constrain the number
> > > > > of CPUs available to Jetty.
> > > > >
> > > > > If we are intentionally constraining the pool to 4 threads during
> > > > > itests, then would it also make sense to limit
> > > > > JETTY_AVAILABLE_PROCESSORS in maven.test.jvm.args of the root
> > > > > pom.xml, so that others don't run into this problem later? If so,
> > > > > I'll send a pull request.
> > > > >
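[Editor's note] Chris's diagnosis can be condensed into a small sketch. This is not Jetty's actual code; the class name is invented and the strict "leased must stay below max" rule is inferred from the error message quoted above ("required=4 < max=4").

```java
// Illustrative sketch of the thread-budget rule behind
// "Insufficient configured threads: required=4 < max=4":
// threads leased to NIO selectors must stay strictly below the pool
// maximum, otherwise server startup aborts.
public class ThreadBudgetSketch {
    static void check(int maxThreads, int leasedThreads) {
        if (leasedThreads >= maxThreads) {
            throw new IllegalStateException("Insufficient configured threads: required="
                + leasedThreads + " < max=" + maxThreads);
        }
    }

    public static void main(String[] args) {
        // hive.server2.webui.max.threads=4 with 4+ selectors on a 16-CPU box:
        try {
            check(4, 4);
            System.out.println("unexpected: check passed");
        } catch (IllegalStateException e) {
            System.out.println("startup aborted: " + e.getMessage());
        }
        // Fewer selectors (e.g. after capping JETTY_AVAILABLE_PROCESSORS):
        check(4, 2);
        System.out.println("startup OK with 2 leased threads of max 4");
    }
}
```

This also shows why either raising the pool maximum or capping the selector count (as Chris did via the system property) resolves the failure.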
> > > > > [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > > > > [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > > > > [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > > > > [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
> > > > >
> > > > > Chris Nauroth
> > > > >
> > > > >
> > > > > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
> > > > > alessandro.solimando@gmail.com> wrote:
> > > > >
> > > > > > You are right Ayush, I got sidetracked by the release notes
> > > > > > ("[HIVE-19217] - Upgrade to Hadoop 3.1.0") and did not check the
> > > > > > versions in the pom file. Apologies for the false alarm, but
> > > > > > better safe than sorry.
> > > > > >
> > > > > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2),
> > > > > > tests including select, join, group by, order by, and explain
> > > > > > (ast, cbo, cbo cost, vectorization) are working correctly against
> > > > > > data in ORC and Parquet file formats.
> > > > > >
> > > > > > No problem for me either when running TestBeelinePasswordOption
> > > > locally.
> > > > > >
> > > > > > So my vote turns into a +1 (non-binding).
> > > > > >
> > > > > > Thanks a lot, Denys, for pushing the release process forward;
> > > > > > sorry again, all, for the oversight!
> > > > > >
> > > > > > Best regards,
> > > > > > Alessandro
> > > > > >
> > > > > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com>
> > > wrote:
> > > > > >
> > > > > > > Hi Alessandro,
> > > > > > > From this:
> > > > > > >
> > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > >
> > > > > > >
> > > > > > > I guess you are using the wrong versions. The Hadoop version to
> > > > > > > be used should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > > > > > >
> > > > > > > The error also seems to be coming from Hadoop code
> > > > > > >
> > > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError:
> > > > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >
> > > > > > >
> > > > > > > The compareTo method in Hadoop was changed in HADOOP-16196,
> > > > > > > which isn't there in Hadoop 3.1.0; it is there post 3.2.1 [3].
> > > > > > >
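[Editor's note] The binary-compatibility mechanics behind that NoSuchMethodError can be sketched without Hadoop on the classpath. The stand-in classes below are hypothetical (not Hive or Hadoop code); they only mirror the shape of the signature change.

```java
// "PathLikeOld" mimics a class whose compareTo takes Object (the shape older
// bytecode resolves against); "PathLikeNew" mimics the tightened
// compareTo(Path)-style signature. Code compiled against the new class embeds
// the typed descriptor, which does not exist in the old class file -- hence
// NoSuchMethodError at link time when jar versions mismatch.
class PathLikeOld implements Comparable<Object> {
    public int compareTo(Object o) { return 0; }
}

class PathLikeNew implements Comparable<PathLikeNew> {
    public int compareTo(PathLikeNew o) { return 0; }
}

public class SignatureMismatchSketch {
    public static void main(String[] args) {
        // The typed overload exists on the new class...
        try {
            System.out.println(PathLikeNew.class.getMethod("compareTo", PathLikeNew.class));
        } catch (NoSuchMethodException e) {
            System.out.println("unexpected: " + e);
        }
        // ...but looking up the same typed descriptor on the old class fails,
        // the reflective analogue of the runtime NoSuchMethodError.
        try {
            PathLikeOld.class.getMethod("compareTo", PathLikeOld.class);
        } catch (NoSuchMethodException e) {
            System.out.println("old class lacks compareTo(PathLikeOld)");
        }
    }
}
```

This is why mixing jars compiled before and after such a change fails only at runtime, when the JVM links the call site.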
> > > > > > > One more thing: TestBeelinePasswordOption passes for me inside
> > > > > > > the source directory.
> > > > > > >
> > > > > > > [INFO] -------------------------------------------------------
> > > > > > > [INFO]  T E S T S
> > > > > > > [INFO] -------------------------------------------------------
> > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time
> > > > > > > elapsed: 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > -Ayush
> > > > > > >
> > > > > > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > > > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > > > > > >
> > > > > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > > >
> > > > > > > > Hi everyone,
> > > > > > > >
> > > > > > > > unfortunately my vote is -1 (although non-binding) due to a
> > > > > > > > classpath error which prevents queries involving Tez from
> > > > > > > > completing (all the details are at the end of the email;
> > > > > > > > apologies for the lengthy text, but I wanted to provide all
> > > > > > > > the context).
> > > > > > > >
> > > > > > > > - verified gpg signature: OK
> > > > > > > >
> > > > > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > > > > >
> > > > > > > > $ gpg --import KEYS
> > > > > > > >
> > > > > > > > ...
> > > > > > > >
> > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > >
> > > > > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > > > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > > > > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > > > > > > gpg:          There is no indication that the signature belongs to the owner.
> > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > >
> > > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > >
> > > > > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > > > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > > > > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > > > > > > gpg:          There is no indication that the signature belongs to the owner.
> > > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > > > >
> > > > > > > > (AFAIK, this warning is OK)
> > > > > > > >
> > > > > > > > - verified package checksum: OK
> > > > > > > >
> > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
> > > > > > > >
> > > > > > > > - verified maven build (no tests): OK
> > > > > > > >
> > > > > > > > $ mvn clean install -DskipTests
> > > > > > > > ...
> > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > [INFO] BUILD SUCCESS
> > > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > > > [INFO] Total time:  04:31 min
> > > > > > > >
> > > > > > > > - checked release notes: OK
> > > > > > > >
> > > > > > > > - checked a few modules in Nexus: OK
> > > > > > > >
> > > > > > > > - environment used:
> > > > > > > >
> > > > > > > > $ sw_vers
> > > > > > > > ProductName: macOS
> > > > > > > > ProductVersion: 11.6.8
> > > > > > > > BuildVersion: 20G730
> > > > > > > >
> > > > > > > > $ mvn --version
> > > > > > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime: .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > > > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > > > > > > >
> > > > > > > > $ java -version
> > > > > > > > openjdk version "1.8.0_292"
> > > > > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > > > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > > > > > > >
> > > > > > > >
> > > > > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > > > > >
> > > > > > > > This is the setup I have used:
> > > > > > > >
> > > > > > > > $ sw hadoop 3.1.0
> > > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > > > $ sw hive https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > >
> > > > > > > > Below are the test data and the query I tried, together with the
> > > > > > > > associated stacktrace for the error. It looks like a classpath issue:
> > > > > > > > probably multiple versions of the class end up on the classpath, and
> > > > > > > > the classloader happened to load the “wrong one”.
> > > > > > > >
> > > > > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > > > > >
> > > > > > > >
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > > > > >
> > > > > > > >
> > > > > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > > > > >
> > > > > > > >
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS
> PARQUET;
> > > > > > > >
> > > > > > > >
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > > > > >
> > > > > > > >
> > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1;
> > > > > > > >
> > > > > > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > INFO  : Total jobs = 1
> > > > > > > > > INFO  : Launching Job 1 out of 1
> > > > > > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > > > > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > > > > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > > > > > >
> > > > > > > >
> > > > > > > > > ERROR : Status: Failed
> > > > > > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]
> > > > > > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]
> > > > > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > > INFO  : Completed executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 6.983 seconds
> > > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > > Error: Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > > >         at
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > > >         at
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > > >         at
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >         at
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE.
> failedVertices:2
> > > > > > > > > killedVertices:0 (state=08S01,code=2)
> > > > > > > >
> > > > > > > >         at
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > > >
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > > Alessandro
> > > > > > > >
> > > > > > > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <
> ayushtkn@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Chris,
> > > > > > > > > The KEYS file is at:
> > > > > > > > > https://downloads.apache.org/hive/KEYS
> > > > > > > > >
> > > > > > > > > -Ayush
> > > > > > > > >
> > > > > > > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <
> > > cnauroth@apache.org
> > > > >
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Could someone please point me toward the right KEYS file
> to
> > > > > import
> > > > > > so
> > > > > > > > > that
> > > > > > > > > > I can verify signatures? Thanks!
> > > > > > > > > >
> > > > > > > > > > I'm seeing numerous test failures due to "Insufficient configured threads" while trying to start the HTTP server. One example is TestBeelinePasswordOption. Is anyone else seeing this? I noticed that HIVE-24484 set hive.server2.webui.max.threads to 4 in /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > > > > > > > >
> > > > > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > > > > > > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > > > > > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > > > > > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > > > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > > > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > > > > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > > > > > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > > > > > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > > > > > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > > > > > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > > > > > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > > > > > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > > > > > > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > > > > > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > > > > > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > > > > > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > > > > > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > > > > > > > ... 21 more
> > > > > > > > > >
> > > > > > > > > > Chris Nauroth
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <
> > szita@apache.org
> > > >
> > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi,
> > > > > > > > > > >
> > > > > > > > > > > Thanks for rebuilding this RC, Denys.
> > > > > > > > > > >
> > > > > > > > > > > Alessandro: IMHO since there was no vote cast yet and
> > we're
> > > > > > talking
> > > > > > > > > about
> > > > > > > > > > > a build option change only, I guess it just doesn't
> worth
> > > > > > > rebuilding
> > > > > > > > > the
> > > > > > > > > > > whole stuff from scratch to create a new RC.
> > > > > > > > > > >
> > > > > > > > > > > I give +1 (binding) to this RC, I verified the
> checksum,
> > > > binary
> > > > > > > > > content,
> > > > > > > > > > > source content, built Hive from source and also tried
> out
> > > the
> > > > > > > > artifacts
> > > > > > > > > > in
> > > > > > > > > > > a mini cluster environment. Built an HMS DB with the
> > schema
> > > > > > scripts
> > > > > > > > > > > provided, did table creation, insert, delete, rollback
> > > > > (Iceberg).
> > > > > > > > > > >
> > > > > > > > > > > Thanks again, Denys for taking this up.
> > > > > > > > > > >
> > > > > > > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > > > > > > Hi Denys,
> > > > > > > > > > > > in other Apache communities I generally see that
> votes
> > > are
> > > > > > > > cancelled
> > > > > > > > > > and
> > > > > > > > > > > a
> > > > > > > > > > > > new RC is prepared when there are changes or blocking
> > > > issues
> > > > > > like
> > > > > > > > in
> > > > > > > > > > this
> > > > > > > > > > > > case, not sure how things are done in Hive though.
> > > > > > > > > > > >
> > > > > > > > > > > > Best regards,
> > > > > > > > > > > > Alessandro
> > > > > > > > > > > >
> > > > > > > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <
> > > > > > > > dkuzmenko@cloudera.com
> > > > > > > > > > > .invalid>
> > > > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi Adam,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks for pointing that out! Upstream release
> guide
> > is
> > > > > > > outdated.
> > > > > > > > > > Once
> > > > > > > > > > > I
> > > > > > > > > > > > > receive the edit rights, I'll amend the
> instructions.
> > > > > > > > > > > > > Updated the release artifacts and checksums:
> > > > > > > > > > > > >
> > > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is
> > > > available
> > > > > > > > > > > > > here:
> > > > > > > > >
> > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > > >
> > > > > > > > > > > > > Maven artifacts are available
> > > > > > > > > > > > > here:
> > > > > > > > > > > > >
> > > > > > > > > >
> > > > > > >
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > > >
> > > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied
> to
> > > the
> > > > > > > source
> > > > > > > > > for
> > > > > > > > > > > > > this release in github, you can see it at
> > > > > > > > > > > > >
> > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > > >
> > > > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Please check again.
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks,
> > > > > > > > > > > > > Denys
> > > > > > > > > > > > >
> > > > > > > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <
> > > > > szita@apache.org
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > > Hi Denys,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Unfortutantely I can't give a plus 1 on this yet,
> > as
> > > > the
> > > > > > > > Iceberg
> > > > > > > > > > > > > artifacts
> > > > > > > > > > > > > > are missing from the binary tar.gz. Perhaps
> > -Piceberg
> > > > > flag
> > > > > > > was
> > > > > > > > > > > missing
> > > > > > > > > > > > > > during build, can you please rebuild?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Thanks,
> > > > > > > > > > > > > > Adam
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > > > > > > Hi team,
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0
> is
> > > > > > available
> > > > > > > > > > > > > > > here:
> > > > > > > > > > >
> > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Maven artifacts are available
> > > > > > > > > > > > > > > here:
> > > > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > >
> > > > >
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been
> > applied
> > > to
> > > > > the
> > > > > > > > > source
> > > > > > > > > > > for
> > > > > > > > > > > > > > > this release in github, you can see it at
> > > > > > > > > > > > > > >
> > > > > > > >
> https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > > > Thanks
> > > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Denys Kuzmenko <dk...@cloudera.com.INVALID>.
Hi Stamatis,

My bad, sorry. Removed the ".iml" files and updated the release artifacts.
**** NO CODE CHANGES ****
I was following the alpha-1 release, and the NOTICE wasn't updated there
either. I don't think that should be a blocker. Noted that, plus the javadoc
artifacts, for the new RC.

fc7908f40ec854671c6795acb525649d83c071d70cf62961dc90a251a0f45e47
 apache-hive-4.0.0-alpha-2-bin.tar.gz
f2814aadeca56ad1d8d9f7797b99d1670f6450f68ff6cae829384c9c102cd7a9
 apache-hive-4.0.0-alpha-2-src.tar.gz

Thanks,
Denys

On Fri, Oct 28, 2022 at 12:28 PM Stamatis Zampetakis <za...@gmail.com>
wrote:

> -1 (non-binding)
>
> Ubuntu 20.04.5 LTS, java version "1.8.0_261", Apache Maven 3.6.3
>
> * Verified signatures and checksums OK
> * Checked diff between git repo and release sources (diff -qr hive-git
> hive-src) KO (among other *.iml files present in release sources but not in
> git)
> * Checked LICENSE, NOTICE, and README.md file OK
> * Built from release sources (mvn clean install -DskipTests -Pitests) OK
> * Package binaries from release sources (mvn clean package -DskipTests) OK
> * Built from git tag (mvn clean install -DskipTests -Pitests) OK
> * Run smoke tests on pseudo cluster using hive-dev-box [1] OK
> * Spot check maven artifacts for general structure, LICENSE, NOTICE,
> META-INF content KO (NOTICE file in hive-exec-4.0.0-alpha-2.jar has
> copyright for 2020)
>
> Smoke tests included: * Derby metastore initialization * simple CREATE
> TABLE statements; * basic INSERT INTO VALUES statements; * basic SELECT
> statements with simple INNER JOIN, WHERE, and GROUP BY variations; *
> EXPLAIN statement variations; * ANALYZE TABLE variations;
>
> The negative vote is for the spurious *.iml (IntelliJ project) files
> present in the release sources and the outdated NOTICE file in maven
> artifacts.
>
> Also javadoc artifacts are missing from maven staging repo. I checked
> previous releases and it seems that they were not there as well so this is
> not blocking but may be worth fixing for the next release.
>
> Best,
> Stamatis
>
> [1] https://lists.apache.org/thread/7yqs7o6ncpottqx8txt0dtt9858ypsbb
>
> https://repository.apache.org/content/repositories/orgapachehive-1117/org/apache/hive/hive-exec/4.0.0-alpha-2/hive-exec-4.0.0-alpha-2.jar
>
> On Fri, Oct 28, 2022 at 10:32 AM Ayush Saxena <ay...@gmail.com> wrote:
>
> > +1 (non-binding)
> > * Built from source.
> > * Verified Checksums.
> > * Verified Signatures
> > * Ran some basic unit tests.
> > * Ran some basic ACID & Iceberg related queries with Tez.
> > * Skimmed through the Maven Artifacts, Looks Good.
> >
> > Thanx Denys for driving the release. Good Luck!!!
> >
> > -Ayush
> >
> > On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko <dkuzmenko@cloudera.com
> > .invalid>
> > wrote:
> >
> > > Extending voting for 24hr. 1 more +1 is needed from the PMC to promote
> > the
> > > release.
> > > If not given, I'll be closing this vote as unsuccessful.
> > >
> > > On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <cn...@apache.org>
> > > wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > * Verified all checksums.
> > > > * Verified all signatures.
> > > > * Built from source.
> > > >     * mvn clean install -Piceberg -DskipTests
> > > > * Tests passed.
> > > >     * mvn --fail-never clean verify -Piceberg -Pitests
> > > > -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
> > > >
> > > > I figured out why my test runs were failing in HTTP server
> > > initialization.
> > > > Jetty enforces thread leasing to warn or abort if there aren't enough
> > > > threads available [1]. During startup, it attempts to lease a thread
> > per
> > > > NIO selector [2]. By default, the number of NIO selectors to use is
> > > > determined based on available CPUs [3]. This is mostly a passthrough
> to
> > > > Runtime.availableProcessors() [4]. In my case, running on a machine
> > with
> > > 16
> > > > CPUs, this ended up creating more than 4 selectors, therefore
> requiring
> > > > more than 4 threads and violating the lease check. I was able to work
> > > > around this by passing the JETTY_AVAILABLE_PROCESSORS system property
> > to
> > > > constrain the number of CPUs available to Jetty.
> > > >
> > > > If we are intentionally constraining the pool to 4 threads during
> > itests,
> > > > then would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
> > > > maven.test.jvm.args of the root pom.xml, so that others don't run
> into
> > > this
> > > > problem later? If so, I'll send a pull request.
> > > >
> > > > [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > > > [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > > > [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > > > [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
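[Editor's note: the thread-budget arithmetic Chris describes can be sketched roughly as below. The selectors-per-CPU heuristic is an assumption for illustration only (Jetty's real sizing lives in SelectorManager, links [3]/[4] above); only the max=4 pool and the JETTY_AVAILABLE_PROCESSORS property come from the thread.]

```shell
# Rough sketch: Jetty derives its NIO selector count from available CPUs,
# and each selector leases one pool thread, so a pool capped at 4 threads
# fails once the host has "too many" CPUs. The cpus/2 heuristic below is
# illustrative only, not Jetty's exact formula.
cpus=${JETTY_AVAILABLE_PROCESSORS:-$(nproc)}
selectors=$(( cpus / 2 ))
if [ "$selectors" -lt 1 ]; then selectors=1; fi
echo "CPUs=${cpus} -> ~${selectors} selector threads leased against max=4"
```

Setting JETTY_AVAILABLE_PROCESSORS=4, as in Chris's mvn invocation, caps `cpus` and keeps the lease inside the budget on larger machines.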
> > > >
> > > > Chris Nauroth
> > > >
> > > >
> > > > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
> > > > alessandro.solimando@gmail.com> wrote:
> > > >
> > > > > You are right Ayush, I got sidetracked by the release notes (*
> > > > [HIVE-19217]
> > > > > - Upgrade to Hadoop 3.1.0) and I did not check the versions in the
> > pom
> > > > > file, apologies for the false alarm but better safe than sorry.
> > > > >
> > > > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2),
> > > > > tests including select, join, groupby, orderby, explain (ast, cbo,
> > > > > cbo cost, vectorization) are working correctly, against data in ORC
> > > > > and parquet file format.
> > > > >
> > > > > No problem for me either when running TestBeelinePasswordOption
> > > locally.
> > > > >
> > > > > So my vote turns into a +1 (non-binding).
> > > > >
> > > > > Thanks a lot Denys for pushing the release process forward, sorry
> > again
> > > > you
> > > > > all for the oversight!
> > > > >
> > > > > Best regards,
> > > > > Alessandro
> > > > >
> > > > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com>
> > wrote:
> > > > >
> > > > > > Hi Alessandro,
> > > > > > From this:
> > > > > >
> > > > > > > $ sw hadoop 3.1.0
> > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > >
> > > > > >
> > > > > > I guess you are using the wrong versions, The Hadoop version to
> be
> > > used
> > > > > > should be 3.3.1[1] and the Tez version should be 0.10.2[2]
> > > > > >
> > > > > > The error also seems to be coming from Hadoop code
> > > > > >
> > > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > > > java.lang.NoSuchMethodError:
> > > > > > > >
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >
> > > > > >
> > > > > > The compareTo method in Hadoop was changed in HADOOP-16196; the
> > > > > > changed signature isn't there in Hadoop 3.1.0, only post-3.2.1 [3].
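[Editor's note: a quick way to confirm the pinned versions Ayush cites is to grep the version properties out of the release pom. The snippet file below is a stand-in so the commands run anywhere; with the real source tarball you would grep its top-level pom.xml instead.]

```shell
# Stand-in for the release's top-level pom.xml; the two values are the ones
# referenced in [1] and [2] above.
cat > pom-snippet.xml <<'EOF'
    <hadoop.version>3.3.1</hadoop.version>
    <tez.version>0.10.2</tez.version>
EOF
# Extract the Hadoop and Tez version properties.
grep -E '<(hadoop|tez)\.version>' pom-snippet.xml
```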
> > > > > >
> > > > > > One more thing: TestBeelinePasswordOption passes for me inside
> > > > > > the source directory.
> > > > > >
> > > > > > [INFO] -------------------------------------------------------
> > > > > > [INFO]  T E S T S
> > > > > > [INFO] -------------------------------------------------------
> > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > >
> > > > > > -Ayush
> > > > > >
> > > > > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > > > > >
> > > > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > > > > > alessandro.solimando@gmail.com> wrote:
> > > > > >
> > > > > > > Hi everyone,
> > > > > > >
> > > > > > > unfortunately my vote is -1 (although non-binding) due to a
> > > classpath
> > > > > > error
> > > > > > > which prevents queries involving Tez to complete (all the
> details
> > > at
> > > > > the
> > > > > > > end of the email, apologies for the lengthy text but I wanted
> to
> > > > > provide
> > > > > > > all the context).
> > > > > > >
> > > > > > > - verified gpg signature: OK
> > > > > > >
> > > > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > > > >
> > > > > > > $ gpg --import KEYS
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > >
> > > > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > > > >
> > > > > > > gpg:                using RSA key
> > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > >
> > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > >
> > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > >
> > > > > > > gpg: WARNING: The key's User ID is not certified with a trusted
> > > > > > signature!
> > > > > > >
> > > > > > > gpg:          There is no indication that the signature belongs
> > to
> > > > the
> > > > > > > owner.
> > > > > > >
> > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5
> 682D
> > > > AFC7
> > > > > > 3125
> > > > > > >
> > > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > >
> > > > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > > > >
> > > > > > > gpg:                using RSA key
> > > > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > >
> > > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > >
> > > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > > > > > dkuzmenko@apache.org>" [unknown]
> > > > > > >
> > > > > > > gpg: WARNING: The key's User ID is not certified with a trusted
> > > > > > signature!
> > > > > > >
> > > > > > > gpg:          There is no indication that the signature belongs
> > to
> > > > the
> > > > > > > owner.
> > > > > > >
> > > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5
> 682D
> > > > AFC7
> > > > > > 3125
> > > > > > >
> > > > > > > (AFAIK, this warning is OK)
> > > > > > >
> > > > > > > - verified package checksum: OK
> > > > > > >
> > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > > > >
> > > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
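[Editor's note: an equivalent, single-step form of the checksum check above uses `sha256sum -c` (GNU coreutils; `shasum -a 256 -c` is the macOS counterpart), which compares and reports OK/FAILED itself. The tiny file below is a stand-in so the sketch is self-contained.]

```shell
# Stand-in artifact; in a real verification you would download the tarball
# and its .sha256 file from the release staging area instead.
echo 'release payload' > apache-hive-4.0.0-alpha-2-bin.tar.gz
sha256sum apache-hive-4.0.0-alpha-2-bin.tar.gz > apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256
# -c reads "<hash>  <file>" lines and reports per-file OK/FAILED.
sha256sum -c apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256
```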
> > > > > > >
> > > > > > > - verified maven build (no tests): OK
> > > > > > >
> > > > > > > $ mvn clean install -DskipTests
> > > > > > >
> > > > > > > ...
> > > > > > >
> > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > >
> > > > > > > [INFO] BUILD SUCCESS
> > > > > > >
> > > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > >
> > > > > > > [INFO] Total time:  04:31 min
> > > > > > >
> > > > > > > - checked release notes: OK
> > > > > > >
> > > > > > > - checked a few modules in Nexus: OK
> > > > > > >
> > > > > > > - environment used:
> > > > > > >
> > > > > > > $ sw_vers
> > > > > > >
> > > > > > > ProductName: macOS
> > > > > > >
> > > > > > > ProductVersion: 11.6.8
> > > > > > >
> > > > > > > BuildVersion: 20G730
> > > > > > >
> > > > > > > $ mvn --version
> > > > > > >
> > > > > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > > > >
> > > > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > > > >
> > > > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime: .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > > > >
> > > > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > > > >
> > > > > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > > > > > >
> > > > > > > $ java -version
> > > > > > >
> > > > > > > openjdk version "1.8.0_292"
> > > > > > >
> > > > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > > > > >
> > > > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > > > > > >
> > > > > > >
> > > > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > > > >
> > > > > > > This is the setup I have used:
> > > > > > >
> > > > > > > $ sw hadoop 3.1.0
> > > > > > >
> > > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > > >
> > > > > > > $ sw hive
> > > > > > >
> > > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > >
> > > > > > > Below are the test data and the query I tried, together with the stack trace for the error. It looks like a classpath issue: most likely multiple versions of the class ended up on the classpath and the classloader happened to load the "wrong" one.
> > > > > > >
> > > > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > > > >
> > > > > > >
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > > > >
> > > > > > >
> > > > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > > > >
> > > > > > >
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> > > > > > >
> > > > > > >
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > > > >
> > > > > > >
> > > > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1;
> > > > > > >
> > > > > > >
> > > > > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > INFO  : Total jobs = 1
> > > > > > > > INFO  : Launching Job 1 out of 1
> > > > > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > > > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > > > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > > > > >
> > > > > > >
> > > > > > > > ERROR : Status: Failed
> > > > > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]
> > > > > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]
> > > > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > INFO  : Completed executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 6.983 seconds
> > > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > > Error: Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0 (state=08S01,code=2)
> > > > > > >
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Alessandro
> > > > > > >
> > > > > > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ayushtkn@gmail.com> wrote:
> > > > > > >
> > > > > > > > Chris,
> > > > > > > > The KEYS file is at:
> > > > > > > > https://downloads.apache.org/hive/KEYS
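A minimal sketch of the usual verification flow with that file, assuming the rc0 tarball and its .asc signature were downloaded into the current directory:

```shell
# Import the Hive committers' public keys, then check the detached
# signature on the source tarball against them.
curl -sO https://downloads.apache.org/hive/KEYS
gpg --import KEYS
gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc apache-hive-4.0.0-alpha-2-src.tar.gz
```

A "Good signature" line with the expected fingerprint is what to look for; the "not certified with a trusted signature" warning is normal for an imported, untrusted key.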
> > > > > > > >
> > > > > > > > -Ayush
> > > > > > > >
> > > > > > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cnauroth@apache.org> wrote:
> > > > > > > >
> > > > > > > > > Could someone please point me toward the right KEYS file to import so
> > > > > > > > > that I can verify signatures? Thanks!
> > > > > > > > >
> > > > > > > > > I'm seeing numerous test failures due to "Insufficient configured
> > > > > > > > > threads" while trying to start the HTTP server. One example is
> > > > > > > > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > > > > > > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > > > > > > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > > > > > > >
> > > > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > > > > > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > > > > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > > > > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > > > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > > > > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > > > > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > > > > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > > > > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > > > > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > > > > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > > > > > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > > > > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > > > > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > > > > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > > > > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > > > > > > ... 21 more
> > > > > > > > >
> > > > > > > > > Chris Nauroth
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <szita@apache.org> wrote:
> > > > > > > > >
> > > > > > > > > > Hi,
> > > > > > > > > >
> > > > > > > > > > Thanks for rebuilding this RC, Denys.
> > > > > > > > > >
> > > > > > > > > > Alessandro: IMHO, since there was no vote cast yet and we're talking
> > > > > > > > > > about a build option change only, I guess it just isn't worth
> > > > > > > > > > rebuilding the whole thing from scratch to create a new RC.
> > > > > > > > > >
> > > > > > > > > > I give +1 (binding) to this RC. I verified the checksum, binary
> > > > > > > > > > content, and source content, built Hive from source, and also tried
> > > > > > > > > > out the artifacts in a mini cluster environment. Built an HMS DB with
> > > > > > > > > > the schema scripts provided, did table creation, insert, delete,
> > > > > > > > > > rollback (Iceberg).
> > > > > > > > > >
> > > > > > > > > > Thanks again, Denys for taking this up.
> > > > > > > > > >
> > > > > > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > > > > > Hi Denys,
> > > > > > > > > > > in other Apache communities I generally see that votes are
> > > > > > > > > > > cancelled and a new RC is prepared when there are changes or
> > > > > > > > > > > blocking issues like in this case; not sure how things are done
> > > > > > > > > > > in Hive, though.
> > > > > > > > > > >
> > > > > > > > > > > Best regards,
> > > > > > > > > > > Alessandro
> > > > > > > > > > >
> > > > > > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com.invalid> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Adam,
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks for pointing that out! Upstream release guide is outdated.
> > > > > > > > > > > > Once I receive the edit rights, I'll amend the instructions.
> > > > > > > > > > > > Updated the release artifacts and checksums:
> > > > > > > > > > > >
> > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available here:
> > > > > > > > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > >
> > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0 apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > >
> > > > > > > > > > > > Maven artifacts are available here:
> > > > > > > > > > > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > >
> > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > > > > > > > > > > this release in github, you can see it at
> > > > > > > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > >
> > > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Please check again.
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks,
> > > > > > > > > > > > Denys
> > > > > > > > > > > >
> > > > > > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <szita@apache.org> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi Denys,
> > > > > > > > > > > > >
> > > > > > > > > > > > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > > > > > > > > > > > > artifacts are missing from the binary tar.gz. Perhaps the -Piceberg
> > > > > > > > > > > > > flag was missing during the build; can you please rebuild?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks,
> > > > > > > > > > > > > Adam
> > > > > > > > > > > > >
> > > > > > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > > > > > Hi team,
> > > > > > > > > > > > > >
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available here:
> > > > > > > > > > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7 apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0 apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Maven artifacts are available here:
> > > > > > > > > > > > > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > > > > > > > > > > > > this release in github, you can see it at
> > > > > > > > > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Thanks
> > > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Stamatis Zampetakis <za...@gmail.com>.
-1 (non-binding)

Ubuntu 20.04.5 LTS, java version "1.8.0_261", Apache Maven 3.6.3

* Verified signatures and checksums OK
* Checked diff between git repo and release sources (diff -qr hive-git
hive-src) KO (among other issues, *.iml files are present in the release
sources but not in git)
* Checked LICENSE, NOTICE, and README.md file OK
* Built from release sources (mvn clean install -DskipTests -Pitests) OK
* Package binaries from release sources (mvn clean package -DskipTests) OK
* Built from git tag (mvn clean install -DskipTests -Pitests) OK
* Run smoke tests on pseudo cluster using hive-dev-box [1] OK
* Spot check maven artifacts for general structure, LICENSE, NOTICE,
META-INF content KO (NOTICE file in hive-exec-4.0.0-alpha-2.jar has
copyright for 2020)

Smoke tests included:
* Derby metastore initialization;
* simple CREATE TABLE statements;
* basic INSERT INTO VALUES statements;
* basic SELECT statements with simple INNER JOIN, WHERE, and GROUP BY variations;
* EXPLAIN statement variations;
* ANALYZE TABLE variations.

The negative vote is for the spurious *.iml (IntelliJ project) files
present in the release sources and the outdated NOTICE file in the maven
artifacts.
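The source-vs-git comparison is easy to replay on a toy layout; the directory names below are illustrative stand-ins for the git checkout and the extracted source tarball:

```shell
# A stray IDE file on the release side shows up immediately with diff -qr.
mkdir -p repo-git repo-src
touch repo-git/pom.xml repo-src/pom.xml
touch repo-src/root.iml              # spurious IntelliJ file, release side only
diff -qr repo-git repo-src | grep 'Only in'
```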

Also, javadoc artifacts are missing from the maven staging repo. I checked
previous releases and it seems they were not there either, so this is not
blocking but may be worth fixing for the next release.

Best,
Stamatis

[1] https://lists.apache.org/thread/7yqs7o6ncpottqx8txt0dtt9858ypsbb
https://repository.apache.org/content/repositories/orgapachehive-1117/org/apache/hive/hive-exec/4.0.0-alpha-2/hive-exec-4.0.0-alpha-2.jar

On Fri, Oct 28, 2022 at 10:32 AM Ayush Saxena <ay...@gmail.com> wrote:

> +1 (non-binding)
> * Built from source.
> * Verified Checksums.
> * Verified Signatures
> * Ran some basic unit tests.
> * Ran some basic ACID & Iceberg related queries with Tez.
> * Skimmed through the Maven Artifacts, Looks Good.
>
> Thanx Denys for driving the release. Good Luck!!!
>
> -Ayush
>
> On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko <dkuzmenko@cloudera.com.invalid> wrote:
>
> > Extending voting for 24 hours. One more +1 is needed from the PMC to promote
> > the release.
> > If not given, I'll be closing this vote as unsuccessful.
> >
> > On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <cn...@apache.org>
> > wrote:
> >
> > > +1 (non-binding)
> > >
> > > * Verified all checksums.
> > > * Verified all signatures.
> > > * Built from source.
> > >     * mvn clean install -Piceberg -DskipTests
> > > * Tests passed.
> > >     * mvn --fail-never clean verify -Piceberg -Pitests
> > > -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
> > >
> > > I figured out why my test runs were failing in HTTP server initialization.
> > > Jetty enforces thread leasing to warn or abort if there aren't enough
> > > threads available [1]. During startup, it attempts to lease a thread per
> > > NIO selector [2]. By default, the number of NIO selectors to use is
> > > determined based on available CPUs [3]. This is mostly a passthrough to
> > > Runtime.availableProcessors() [4]. In my case, running on a machine with
> > > 16 CPUs, this ended up creating more than 4 selectors, therefore requiring
> > > more than 4 threads and violating the lease check. I was able to work
> > > around this by passing the JETTY_AVAILABLE_PROCESSORS system property to
> > > constrain the number of CPUs available to Jetty.
> > >
> > > If we are intentionally constraining the pool to 4 threads during itests,
> > > then would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
> > > maven.test.jvm.args of the root pom.xml, so that others don't run into
> > > this problem later? If so, I'll send a pull request.
> > >
> > > [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > > [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > > [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > > [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
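The arithmetic behind this diagnosis can be sketched in a few lines. The "half the available CPUs" selector default below is an assumption read off the linked Jetty 9.4 sources; treat it as illustrative, not the exact formula:

```shell
# Sketch: why a 16-CPU machine blows a 4-thread web UI budget.
CPUS=16                     # Runtime.availableProcessors() on the test box
SELECTORS=$(( CPUS / 2 ))   # assumed Jetty default: half the CPUs, min 1
if [ "$SELECTORS" -lt 1 ]; then SELECTORS=1; fi
BUDGET=4                    # hive.server2.webui.max.threads from HIVE-24484
echo "selectors=$SELECTORS budget=$BUDGET"
if [ "$SELECTORS" -gt "$BUDGET" ]; then
  # Mirrors Jetty's ThreadPoolBudget abort: required threads exceed the pool
  echo "lease check fails: required > max"
fi
```

With JETTY_AVAILABLE_PROCESSORS=4, the same arithmetic yields 2 selectors, which fits inside the 4-thread budget.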
> > >
> > > Chris Nauroth
> > >
> > >
> > > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <alessandro.solimando@gmail.com> wrote:
> > >
> > > > You are right Ayush, I got sidetracked by the release notes (* [HIVE-19217]
> > > > - Upgrade to Hadoop 3.1.0) and I did not check the versions in the pom
> > > > file, apologies for the false alarm but better safe than sorry.
> > > >
> > > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
> > > > including select, join, groupby, orderby, explain (ast, cbo, cbo cost,
> > > > vectorization) are working correctly against data in ORC and Parquet
> > > > file formats.
> > > >
> > > > No problem for me either when running TestBeelinePasswordOption locally.
> > > >
> > > > So my vote turns into a +1 (non-binding).
> > > >
> > > > Thanks a lot Denys for pushing the release process forward, sorry again
> > > > you all for the oversight!
> > > >
> > > > Best regards,
> > > > Alessandro
> > > >
> > > > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com> wrote:
> > > >
> > > > > Hi Alessandro,
> > > > > From this:
> > > > >
> > > > > > $ sw hadoop 3.1.0
> > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > >
> > > > >
> > > > > I guess you are using the wrong versions. The Hadoop version to be used
> > > > > should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > > > >
> > > > > The error also seems to be coming from Hadoop code
> > > > >
> > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError:
> > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >
> > > > >
> > > > > The compareTo method in Hadoop was changed in HADOOP-16196, which isn't
> > > > > there in Hadoop 3.1.0; it is there post-3.2.1 [3].
> > > > >
> > > > > Another thing: TestBeelinePasswordOption passes for me inside the source
> > > > > directory.
> > > > >
> > > > > [*INFO*] -------------------------------------------------------
> > > > >
> > > > > [*INFO*]  T E S T S
> > > > >
> > > > > [*INFO*] -------------------------------------------------------
> > > > >
> > > > > [*INFO*] Running org.apache.hive.beeline.*TestBeelinePasswordOption*
> > > > >
> > > > > [*INFO*] *Tests run: 10*, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.264 s - in org.apache.hive.beeline.*TestBeelinePasswordOption*
> > > > >
> > > > > -Ayush
> > > > >
> > > > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > > > >
> > > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <alessandro.solimando@gmail.com> wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > unfortunately my vote is -1 (although non-binding) due to a classpath
> > > > > > error which prevents queries involving Tez from completing (all the
> > > > > > details are at the end of the email; apologies for the lengthy text,
> > > > > > but I wanted to provide all the context).
> > > > > >
> > > > > > - verified gpg signature: OK
> > > > > >
> > > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > > >
> > > > > > $ gpg --import KEYS
> > > > > >
> > > > > > ...
> > > > > >
> > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > >
> > > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > > > > gpg:          There is no indication that the signature belongs to the owner.
> > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > >
> > > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > >
> > > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > > > > gpg:          There is no indication that the signature belongs to the owner.
> > > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > > >
> > > > > > (AFAIK, this warning is OK)
> > > > > >
> > > > > > - verified package checksum: OK
> > > > > >
> > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > > >
> > > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
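The same diff pattern can be exercised on a throwaway file; `sha256sum` below is the Linux equivalent of the macOS `shasum -a 256` used above, and the file name is illustrative:

```shell
# Toy version of the checksum comparison: record a file's SHA-256, then
# verify by diffing the recorded hash against a fresh computation.
printf 'release-bits\n' > artifact.tar.gz
sha256sum artifact.tar.gz > artifact.tar.gz.sha256
# No output and exit code 0 from diff means the checksum matches.
diff <(cat artifact.tar.gz.sha256) <(sha256sum artifact.tar.gz) && echo "checksum OK"
```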
> > > > > >
> > > > > > - verified maven build (no tests): OK
> > > > > >
> > > > > > $ mvn clean install -DskipTests
> > > > > >
> > > > > > ...
> > > > > >
> > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > [INFO] BUILD SUCCESS
> > > > > > [INFO] ------------------------------------------------------------------------
> > > > > > [INFO] Total time:  04:31 min
> > > > > >
> > > > > > - checked release notes: OK
> > > > > >
> > > > > > - checked few modules in Nexus: OK
> > > > > >
> > > > > > - environment used:
> > > > > >
> > > > > > $ sw_vers
> > > > > >
> > > > > > ProductName: macOS
> > > > > >
> > > > > > ProductVersion: 11.6.8
> > > > > >
> > > > > > BuildVersion: 20G730
> > > > > >
> > > > > > $ mvn --version
> > > > > >
> > > > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > > >
> > > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > > >
> > > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > > > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > > >
> > > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > > >
> > > > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > > > > >
> > > > > > $ java -version
> > > > > >
> > > > > > openjdk version "1.8.0_292"
> > > > > >
> > > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > > > >
> > > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > > > > >
> > > > > >
> > > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > > >
> > > > > > This is the setup I have used:
> > > > > >
> > > > > > $ sw hadoop 3.1.0
> > > > > >
> > > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > > >
> > > > > > $ sw hive https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > >
> > > > > > In what follows are the test data and queries I tried, with the
> > > > > > associated stacktrace for the error. It seems to be a classpath issue;
> > > > > > probably multiple versions of the class end up on the CP and the
> > > > > > classloader happened to load the "wrong one".
> > > > > >
> > > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > > >
> > > > > >
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > > >
> > > > > >
> > > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > > >
> > > > > >
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > > >
> > > > > >
> > > > > >
> > > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> > > > > >
> > > > > >
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > > >
> > > > > >
> > > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a)
> > > > > > WHERE t1.b < 3 AND t2.b > 1;
> > > > > >
> > > > > >
> > > > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > INFO  : Total jobs = 1
> > > > > > > INFO  : Launching Job 1 out of 1
> > > > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > > > >
> > > > > > > ERROR : Status: Failed
> > > > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]
> > > > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]
> > > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > INFO  : Completed executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 6.983 seconds
> > > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > > Error: Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0 (state=08S01,code=2)
> > > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >
> > > > > > Best regards,
> > > > > > Alessandro
> > > > > >
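A note on the failure above: a java.lang.NoSuchMethodError on org.apache.hadoop.fs.Path.compareTo(Path) usually indicates a binary-compatibility mismatch, i.e. classes compiled against a Hadoop where Path declares compareTo(Path) running against an older hadoop-common jar on the Tez classpath whose Path only declares compareTo(Object). The shape of the failure can be sketched in Python terms (the PathV3/PathV2 names below are hypothetical stand-ins, not Hive code):

```python
# Sketch only: a Python stand-in for a JVM binary-compatibility failure.
# PathV3/PathV2 are hypothetical analogues of newer/older Hadoop Path classes.

class PathV3:
    """Analogue of a Path class where the typed comparison method exists."""
    def compare_to(self, other):
        return 0

class PathV2:
    """Analogue of an older Path class: no compare_to method at all."""
    pass

def sort_splits(paths):
    # The "call site" assumes compare_to exists, like bytecode compiled
    # against the newer jar; with the older class the method lookup fails
    # at run time, which is what NoSuchMethodError reports on the JVM.
    return sorted(paths, key=lambda p: p.compare_to(paths[0]))

sort_splits([PathV3(), PathV3()])       # fine: the method resolves
try:
    sort_splits([PathV2(), PathV2()])   # fails at call time
except AttributeError as e:
    print("runtime method lookup failed:", e)
```

If this matches, checking which hadoop-common jar actually ends up on the Tez/YARN container classpath is the usual first diagnostic step.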
> > > > > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:
> > > > > >
> > > > > > > Chris,
> > > > > > > The KEYS file is at:
> > > > > > > https://downloads.apache.org/hive/KEYS
> > > > > > >
> > > > > > > -Ayush
> > > > > > >
> > > > > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cnauroth@apache.org> wrote:
> > > > > > >
> > > > > > > > Could someone please point me toward the right KEYS file to import so that I can verify signatures? Thanks!
> > > > > > > >
> > > > > > > > I'm seeing numerous test failures due to "Insufficient configured threads" while trying to start the HTTP server. One example is TestBeelinePasswordOption. Is anyone else seeing this? I noticed that HIVE-24484 set hive.server2.webui.max.threads to 4 in /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > > > > > >
> > > > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > > > > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > > > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > > > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > > > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > > > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > > > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > > > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > > > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > > > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > > > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > > > > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > > > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > > > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > > > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > > > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > > > > > ... 21 more
> > > > > > > >
> > > > > > > > Chris Nauroth
> > > > > > > >
> > > > > > > >
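For context on the "Insufficient configured threads: required=4 < max=4" error: Jetty's thread-pool budget lets server components (selectors, acceptors) lease threads from the pool, and the leased total must stay strictly below the pool maximum so threads remain for actual request handling. A simplified sketch of that check (the real logic is in org.eclipse.jetty.util.thread.ThreadPoolBudget, and the exact lease accounting differs):

```python
# Simplified, hedged sketch of a Jetty-style thread-pool budget check.
def check_budget(leases, max_threads):
    required = sum(leases)  # threads permanently leased by components
    if required >= max_threads:
        # Mirrors the message seen in the test failures above.
        raise RuntimeError(
            "Insufficient configured threads: "
            f"required={required} < max={max_threads}")
    return max_threads - required  # headroom left for serving requests

# With hive.server2.webui.max.threads=4 the web UI's components can
# consume the whole pool, so startup fails; a larger maximum (the
# HiveConf default is 50) leaves headroom:
print(check_budget([2, 1], max_threads=50))
```

This is why the mini-cluster hive-site.xml value of 4 trips the check while the default of 50 does not.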
> > > > > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <szita@apache.org> wrote:
> > > > > > > >
> > > > > > > > > Hi,
> > > > > > > > >
> > > > > > > > > Thanks for rebuilding this RC, Denys.
> > > > > > > > >
> > > > > > > > > Alessandro: IMHO since there was no vote cast yet and we're talking about a build option change only, I guess it just doesn't worth rebuilding the whole stuff from scratch to create a new RC.
> > > > > > > > >
> > > > > > > > > I give +1 (binding) to this RC, I verified the checksum, binary content, source content, built Hive from source and also tried out the artifacts in a mini cluster environment. Built an HMS DB with the schema scripts provided, did table creation, insert, delete, rollback (Iceberg).
> > > > > > > > >
> > > > > > > > > Thanks again, Denys for taking this up.
> > > > > > > > >
> > > > > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > > > > Hi Denys,
> > > > > > > > > > in other Apache communities I generally see that votes are
> > > > > > > > > > cancelled and a new RC is prepared when there are changes
> > > > > > > > > > or blocking issues like in this case; not sure how things
> > > > > > > > > > are done in Hive though.
> > > > > > > > > >
> > > > > > > > > > Best regards,
> > > > > > > > > > Alessandro
> > > > > > > > > >
> > > > > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko
> > > > > > > > > > <dkuzmenko@cloudera.com.invalid> wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Adam,
> > > > > > > > > > >
> > > > > > > > > > > Thanks for pointing that out! The upstream release guide
> > > > > > > > > > > is outdated. Once I receive the edit rights, I'll amend
> > > > > > > > > > > the instructions.
> > > > > > > > > > > Updated the release artifacts and checksums:
> > > > > > > > > > >
> > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > > > > here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > The checksums are these:
> > > > > > > > > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > >
> > > > > > > > > > > Maven artifacts are available
> > > > > > > > > > > here: https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > >
> > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > > > > > > > > > > source for this release in github, you can see it at
> > > > > > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > >
> > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Please check again.
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > > Denys
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita
> > > > > > > > > > > <szita@apache.org> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Denys,
> > > > > > > > > > > >
> > > > > > > > > > > > Unfortunately I can't give a +1 on this yet, as the
> > > > > > > > > > > > Iceberg artifacts are missing from the binary tar.gz.
> > > > > > > > > > > > Perhaps the -Piceberg flag was missing during the build;
> > > > > > > > > > > > can you please rebuild?
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks,
> > > > > > > > > > > > Adam
> > > > > > > > > > > >
> > > > > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > > > > Hi team,
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > > > > > > here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > > >
> > > > > > > > > > > > >
> > > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > > >
> > > > > > > > > > > > > Maven artifacts are available
> > > > > > > > > > > > > here: https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > > >
> > > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > > > > > > > > > > > > source for this release in github, you can see it at
> > > > > > > > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > > >
> > > > > > > > > > > > > The git commit hash is:
> > > > > > > > > > > > > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > > >
> > > > > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Ayush Saxena <ay...@gmail.com>.
+1 (non-binding)
* Built from source.
* Verified Checksums.
* Verified Signatures
* Ran some basic unit tests.
* Ran some basic ACID & Iceberg related queries with Tez.
* Skimmed through the Maven artifacts; looks good.

Thanks Denys for driving the release. Good luck!

-Ayush

On Fri, 28 Oct 2022 at 13:46, Denys Kuzmenko <dk...@cloudera.com.invalid>
wrote:

> Extending voting for 24 hours. One more +1 is needed from the PMC to promote
> the release. If not given, I'll close this vote as unsuccessful.
>
> On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <cn...@apache.org>
> wrote:
>
> > +1 (non-binding)
> >
> > * Verified all checksums.
> > * Verified all signatures.
> > * Built from source.
> >     * mvn clean install -Piceberg -DskipTests
> > * Tests passed.
> >     * mvn --fail-never clean verify -Piceberg -Pitests
> > -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
> >
> > I figured out why my test runs were failing in HTTP server
> initialization.
> > Jetty enforces thread leasing to warn or abort if there aren't enough
> > threads available [1]. During startup, it attempts to lease a thread per
> > NIO selector [2]. By default, the number of NIO selectors to use is
> > determined based on available CPUs [3]. This is mostly a passthrough to
> > Runtime.availableProcessors() [4]. In my case, running on a machine with
> 16
> > CPUs, this ended up creating more than 4 selectors, therefore requiring
> > more than 4 threads and violating the lease check. I was able to work
> > around this by passing the JETTY_AVAILABLE_PROCESSORS system property to
> > constrain the number of CPUs available to Jetty.
> >
> > If we are intentionally constraining the pool to 4 threads during itests,
> > then would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
> > maven.test.jvm.args of the root pom.xml, so that others don't run into
> this
> > problem later? If so, I'll send a pull request.
> >
> > [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> > [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> > [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> > [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
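[Editor's note] The CPU-detection behavior described above can be sketched as follows. This is a minimal approximation of Jetty's ProcessorUtils (the class name JettyProcessors is hypothetical, and the real class also caches the value): an explicit JETTY_AVAILABLE_PROCESSORS system property wins, otherwise Runtime.availableProcessors() is used.

```java
public class JettyProcessors {
    // Approximation of org.eclipse.jetty.util.ProcessorUtils: the
    // JETTY_AVAILABLE_PROCESSORS system property, when set and numeric,
    // overrides the JVM-reported CPU count.
    static int availableProcessors() {
        String prop = System.getProperty("JETTY_AVAILABLE_PROCESSORS");
        if (prop != null) {
            try {
                return Integer.parseInt(prop);
            } catch (NumberFormatException ignored) {
                // fall through to the JVM-reported count
            }
        }
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.setProperty("JETTY_AVAILABLE_PROCESSORS", "4");
        System.out.println(availableProcessors()); // prints 4
    }
}
```

This is why constraining JETTY_AVAILABLE_PROCESSORS in the test JVM arguments caps the number of NIO selectors (and hence the leased threads) regardless of the machine's actual CPU count.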
> >
> > Chris Nauroth
> >
> >
> > On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
> > alessandro.solimando@gmail.com> wrote:
> >
> > > You are right Ayush, I got sidetracked by the release notes
> > > ("[HIVE-19217] - Upgrade to Hadoop 3.1.0") and I did not check the
> > > versions in the pom file. Apologies for the false alarm, but better
> > > safe than sorry.
> > >
> > > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
> > > including select, join, groupby, orderby, explain (ast, cbo, cbo cost,
> > > vectorization) are working correctly against data in ORC and Parquet
> > > file formats.
> > >
> > > No problem for me either when running TestBeelinePasswordOption
> locally.
> > >
> > > So my vote turns into a +1 (non-binding).
> > >
> > > Thanks a lot Denys for pushing the release process forward, sorry again
> > > to you all for the oversight!
> > >
> > > Best regards,
> > > Alessandro
> > >
> > > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com> wrote:
> > >
> > > > Hi Alessandro,
> > > > From this:
> > > >
> > > > > $ sw hadoop 3.1.0
> > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > >
> > > >
> > > > I guess you are using the wrong versions. The Hadoop version to be used
> > > > should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > > >
> > > > The error also seems to be coming from Hadoop code
> > > >
> > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > java.lang.NoSuchMethodError:
> > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > >
> > > >
> > > > The compareTo method in Hadoop was changed in HADOOP-16196, which isn't
> > > > in Hadoop 3.1.0; it is there post-3.2.1 [3].
> > > >
> > > > Another thing: TestBeelinePasswordOption passes for me inside the
> > > > source directory.
> > > >
> > > > [*INFO*] -------------------------------------------------------
> > > >
> > > > [*INFO*]  T E S T S
> > > >
> > > > [*INFO*] -------------------------------------------------------
> > > >
> > > > [*INFO*] Running org.apache.hive.beeline.*TestBeelinePasswordOption*
> > > >
> > > > [*INFO*] *Tests run: 10*, Failures: 0, Errors: 0, Skipped: 0, Time
> > > elapsed:
> > > > 18.264 s - in org.apache.hive.beeline.*TestBeelinePasswordOption*
> > > >
> > > > -Ayush
> > > >
> > > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
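[Editor's note] The NoSuchMethodError above is a binary-compatibility failure: code compiled against Hadoop 3.3.x links to the exact descriptor compareTo(Lorg/apache/hadoop/fs/Path;)I, which jars predating HADOOP-16196 don't declare (they only have compareTo(Object)). A minimal sketch of the same mismatch via reflection — the class names BinaryCompat and OldPath are hypothetical stand-ins, not Hadoop classes:

```java
public class BinaryCompat {
    // Stand-in for the pre-HADOOP-16196 Path: it declares only
    // compareTo(Object), not a Path-typed overload.
    public static class OldPath {
        public int compareTo(Object o) { return 0; }
    }

    public static void main(String[] args) {
        // Code compiled against the newer class links to the exact
        // descriptor compareTo(OldPath); when only compareTo(Object)
        // exists at runtime, the JVM raises NoSuchMethodError.
        // Reflection surfaces the same mismatch as NoSuchMethodException:
        try {
            OldPath.class.getMethod("compareTo", OldPath.class);
            System.out.println("found");
        } catch (NoSuchMethodException e) {
            System.out.println("missing compareTo(OldPath)"); // this branch runs
        }
    }
}
```

The fix is the one Ayush describes: run against the Hadoop version Hive was compiled with, so the linked descriptors exist at runtime.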
> > > >
> > > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > > > alessandro.solimando@gmail.com> wrote:
> > > >
> > > > > Hi everyone,
> > > > >
> > > > > unfortunately my vote is -1 (although non-binding) due to a classpath
> > > > > error which prevents queries involving Tez from completing (all the
> > > > > details are at the end of the email; apologies for the lengthy text,
> > > > > but I wanted to provide all the context).
> > > > >
> > > > > - verified gpg signature: OK
> > > > >
> > > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > > >
> > > > > $ gpg --import KEYS
> > > > >
> > > > > ...
> > > > >
> > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > >
> > > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > > >
> > > > > gpg:                using RSA key
> > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > >
> > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > >
> > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > > > dkuzmenko@apache.org>" [unknown]
> > > > >
> > > > > gpg: WARNING: The key's User ID is not certified with a trusted
> > > > signature!
> > > > >
> > > > > gpg:          There is no indication that the signature belongs to
> > the
> > > > > owner.
> > > > >
> > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > >
> > > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > >
> > > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > > >
> > > > > gpg:                using RSA key
> > > > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > > >
> > > > > gpg:                issuer "dkuzmenko@apache.org"
> > > > >
> > > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > > > dkuzmenko@apache.org>" [unknown]
> > > > >
> > > > > gpg: WARNING: The key's User ID is not certified with a trusted
> > > > signature!
> > > > >
> > > > > gpg:          There is no indication that the signature belongs to
> > the
> > > > > owner.
> > > > >
> > > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > > >
> > > > > (AFAIK, this warning is OK)
> > > > >
> > > > > - verified package checksum: OK
> > > > >
> > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > > >
> > > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
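[Editor's note] The diff/shasum check above can also be done programmatically; a minimal Java sketch (class name Sha256Check is hypothetical) computing the same SHA-256 digest via the JDK's MessageDigest, shown here hashing the empty input whose well-known digest starts with e3b0c442:

```java
import java.security.MessageDigest;

public class Sha256Check {
    // Computes the same hex digest as `shasum -a 256` / `sha256sum`,
    // using the JDK's built-in SHA-256 implementation.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // SHA-256 of the empty input:
        System.out.println(sha256Hex(new byte[0]));
        // prints e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    }
}
```

To verify a release tarball this way you would read the file's bytes, hash them, and compare against the published .sha256 value.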
> > > > >
> > > > > - verified maven build (no tests): OK
> > > > >
> > > > > $ mvn clean install -DskipTests
> > > > >
> > > > > ...
> > > > >
> > > > > [INFO] ------------------------------------------------------------------------
> > > > > [INFO] BUILD SUCCESS
> > > > > [INFO] ------------------------------------------------------------------------
> > > > > [INFO] Total time:  04:31 min
> > > > >
> > > > > - checked release notes: OK
> > > > >
> > > > > - checked few modules in Nexus: OK
> > > > >
> > > > > - environment used:
> > > > >
> > > > > $ sw_vers
> > > > >
> > > > > ProductName: macOS
> > > > >
> > > > > ProductVersion: 11.6.8
> > > > >
> > > > > BuildVersion: 20G730
> > > > >
> > > > > $ mvn --version
> > > > >
> > > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > > >
> > > > > Maven home: .../.sdkman/candidates/maven/current
> > > > >
> > > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > > >
> > > > > Default locale: en_IE, platform encoding: UTF-8
> > > > >
> > > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family:
> "mac"
> > > > >
> > > > > $ java -version
> > > > >
> > > > > openjdk version "1.8.0_292"
> > > > >
> > > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > > >
> > > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed
> mode)
> > > > >
> > > > >
> > > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> > > > >
> > > > > This is the setup I have used:
> > > > >
> > > > > $ sw hadoop 3.1.0
> > > > >
> > > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > > >
> > > > > $ sw hive https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > >
> > > > > What follows is the test data and the query I tried, with the
> > > > > associated stacktrace for the error. It seems to be a classpath
> > > > > issue: probably multiple versions of the class end up on the CP and
> > > > > the classloader happened to load the "wrong one".
> > > > >
> > > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > > >
> > > > >
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > > >
> > > > >
> > > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > > >
> > > > >
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > > >
> > > > >
> > > > >
> > > > > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> > > > >
> > > > >
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > > >
> > > > >
> > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a)
> > > > > WHERE t1.b < 3 AND t2.b > 1;
> > > > >
> > > > >
> > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > INFO  : Total jobs = 1
> > > > > INFO  : Launching Job 1 out of 1
> > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > > >
> > > > > > ERROR : Status: Failed
> > > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]
> > > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]
> > > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > > > > killedVertices:0
> > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > INFO  : Completed executing
> > > > > >
> > > >
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > > > Time taken: 6.983 seconds
> > > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > > Error: Error while compiling statement: FAILED: Execution Error,
> > > return
> > > > > > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex
> > > failed,
> > > > > > vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01,
> > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2]
> > > > killed/failed
> > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer
> > failed,
> > > > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > > > java.lang.NoSuchMethodError:
> > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at
> > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native
> > Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]Vertex failed, vertexName=Map 1,
> > > > > vertexId=vertex_1666888075798_0001_1_00,
> > > > > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1]
> > > > killed/failed
> > > > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer
> > failed,
> > > > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > > java.lang.NoSuchMethodError:
> > > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > > >         at
> > > java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > > >         at java.security.AccessController.doPrivileged(Native
> > Method)
> > > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > > >         at
> > > > > >
> > > > >
> > > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > > > > killedVertices:0 (state=08S01,code=2)
> > > > >
> > > > > Best regards,
> > > > > Alessandro
> > > > >
> > > > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:
> > > > >
> > > > > > Chris,
> > > > > > The KEYS file is at:
> > > > > > https://downloads.apache.org/hive/KEYS
> > > > > >
> > > > > > -Ayush
> > > > > >
> > > > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cnauroth@apache.org> wrote:
> > > > > >
> Could someone please point me toward the right KEYS file to import so that I can verify signatures? Thanks!
>
> I'm seeing numerous test failures due to "Insufficient configured threads" while trying to start the HTTP server. One example is TestBeelinePasswordOption. Is anyone else seeing this? I noticed that HIVE-24484 set hive.server2.webui.max.threads to 4 in /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
>
> [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
>         at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
>         at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
>         at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>         at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>         at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>         at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
>         at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>         at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>         at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>         at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>         at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
>         at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
>         at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
>         at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
>         at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
>         at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
>         at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
>         at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
>         at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
>         at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
>         at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
>         at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
>         at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
>         at org.eclipse.jetty.server.Server.doStart(Server.java:401)
>         at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
>         at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
>         at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
>         ... 21 more
>
> Chris Nauroth
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:
> > > > > > >
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > Thanks for rebuilding this RC, Denys.
> > > > > > > >
> > > > > > > > Alessandro: IMHO, since no vote has been cast yet and we're talking about a build option change only, I don't think it's worth rebuilding the whole thing from scratch to create a new RC.
> > > > > > > >
> > > > > > > > I give +1 (binding) to this RC. I verified the checksum, binary content, and source content, built Hive from source, and tried out the artifacts in a mini cluster environment. I built an HMS DB with the schema scripts provided and did table creation, insert, delete, and rollback (Iceberg).
> > > > > > > >
> > > > > > > > Thanks again, Denys, for taking this up.
> > > > > > > >
> > > > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > > > Hi Denys,
> > > > > > > > > In other Apache communities I generally see that votes are cancelled and a new RC is prepared when there are changes or blocking issues like in this case; I'm not sure how things are done in Hive, though.
> > > > > > > > >
> > > > > > > > > Best regards,
> > > > > > > > > Alessandro
> > > > > > > > >
> > > > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com.invalid> wrote:
> > > > > > > > >
> > > > > > > > > > Hi Adam,
> > > > > > > > > >
> > > > > > > > > > Thanks for pointing that out! The upstream release guide is outdated; once I receive the edit rights, I'll amend the instructions. I have updated the release artifacts and checksums:
> > > > > > > > > >
> > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > >
> > > > > > > > > > The checksums are these:
> > > > > > > > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0 apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > >
> > > > > > > > > > Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > >
> > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for this release in GitHub; you can see it at https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > >
> > > > > > > > > > The git commit hash is: https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Please check again.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Denys
> > > > > > > > > >
> > > > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <szita@apache.org> wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Denys,
> > > > > > > > > > >
> > > > > > > > > > > Unfortunately I can't give a +1 on this yet, as the Iceberg artifacts are missing from the binary tar.gz. Perhaps the -Piceberg flag was missing during the build; can you please rebuild?
> > > > > > > > > > >
> > > > > > > > > > > Thanks,
> > > > > > > > > > > Adam
> > > > > > > > > > >
> > > > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > > > Hi team,
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > > >
> > > > > > > > > > > > The checksums are these:
> > > > > > > > > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7 apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0 apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > > >
> > > > > > > > > > > > Maven artifacts are available here: https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > > >
> > > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for this release in GitHub; you can see it at https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > > >
> > > > > > > > > > > > The git commit hash is: https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > > >
> > > > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > > > >
> > > > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Denys Kuzmenko <dk...@cloudera.com.INVALID>.
Extending the vote by 24 hours. One more +1 from the PMC is needed to promote the
release.
If it is not given, I'll close this vote as unsuccessful.

On Thu, Oct 27, 2022 at 11:16 PM Chris Nauroth <cn...@apache.org> wrote:

> +1 (non-binding)
>
> * Verified all checksums.
> * Verified all signatures.
> * Built from source.
>     * mvn clean install -Piceberg -DskipTests
> * Tests passed.
>     * mvn --fail-never clean verify -Piceberg -Pitests
> -Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'
>
> I figured out why my test runs were failing in HTTP server initialization.
> Jetty enforces thread leasing to warn or abort if there aren't enough
> threads available [1]. During startup, it attempts to lease a thread per
> NIO selector [2]. By default, the number of NIO selectors to use is
> determined based on available CPUs [3]. This is mostly a passthrough to
> Runtime.availableProcessors() [4]. In my case, running on a machine with 16
> CPUs, this ended up creating more than 4 selectors, therefore requiring
> more than 4 threads and violating the lease check. I was able to work
> around this by passing the JETTY_AVAILABLE_PROCESSORS system property to
> constrain the number of CPUs available to Jetty.
>
> If we are intentionally constraining the pool to 4 threads during itests,
> then would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
> maven.test.jvm.args of the root pom.xml, so that others don't run into this
> problem later? If so, I'll send a pull request.
>
> [1] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
> [2] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
> [3] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
> [4] https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45
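The budget failure described above can be sketched in miniature. This is a hedged model, not Jetty's actual code: the class `ThreadBudgetSketch` and its selector-count formula are illustrative assumptions (Jetty derives selector counts differently across versions); only the `JETTY_AVAILABLE_PROCESSORS` system-property override mirrors the real knob.

```java
// Illustrative model (not Jetty source): with a 4-thread pool and a
// selector count derived from the CPU count, the budget check fails on
// many-core machines unless JETTY_AVAILABLE_PROCESSORS caps it.
public class ThreadBudgetSketch {

    // Stand-in for Jetty's ProcessorUtils.availableProcessors():
    // honors the JETTY_AVAILABLE_PROCESSORS system property override.
    static int availableProcessors() {
        String override = System.getProperty("JETTY_AVAILABLE_PROCESSORS");
        return override != null ? Integer.parseInt(override)
                                : Runtime.getRuntime().availableProcessors();
    }

    // Assumption: selectors scale with CPUs (the exact formula is
    // hypothetical here); each selector leases one pool thread, and the
    // leases must leave headroom below the pool's max threads.
    static boolean budgetOk(int maxThreads, int cpus) {
        int selectors = Math.max(1, cpus / 2); // hypothetical derivation
        return selectors < maxThreads;         // leases must fit the budget
    }

    public static void main(String[] args) {
        System.out.println(budgetOk(4, 4));  // few CPUs: leases fit
        System.out.println(budgetOk(4, 16)); // many CPUs: budget exhausted
    }
}
```

Under this model, capping the visible CPU count (as the `-DJETTY_AVAILABLE_PROCESSORS=4` workaround does) keeps the derived selector count, and hence the leased threads, within the 4-thread web UI pool.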
>
> Chris Nauroth
>
>
> On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <alessandro.solimando@gmail.com> wrote:
>
> > You are right, Ayush. I got sidetracked by the release notes ("[HIVE-19217] - Upgrade to Hadoop 3.1.0") and did not check the versions in the pom file. Apologies for the false alarm, but better safe than sorry.
> >
> > With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
> > including select, join, groupby, orderby, explain (ast, cbo, cbo cost,
> > vectorization) are working correctly against data in the ORC and Parquet file formats.
> >
> > No problem for me either when running TestBeelinePasswordOption locally.
> >
> > So my vote turns into a +1 (non-binding).
> >
> > Thanks a lot Denys for pushing the release process forward, sorry again
> you
> > all for the oversight!
> >
> > Best regards,
> > Alessandro
> >
> > On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com> wrote:
> >
> > > Hi Alessandro,
> > > From this:
> > >
> > > > $ sw hadoop 3.1.0
> > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > >
> > >
> > > I guess you are using the wrong versions. The Hadoop version to be used should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> > >
> > > The error also seems to be coming from Hadoop code
> > >
> > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > > java.lang.NoSuchMethodError:
> > > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >
> > >
> > > The compareTo method in Hadoop was changed in HADOOP-16196, which isn't in Hadoop 3.1.0; it is there post-3.2.1 [3].
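The incompatibility behind that NoSuchMethodError can be illustrated without Hadoop itself. Before HADOOP-16196, Path's comparison was the raw `compareTo(Object)`; afterwards it is `compareTo(Path)`, so bytecode compiled against newer Hadoop invokes a descriptor that does not exist in a 3.1.0 jar. A sketch with hypothetical stand-in classes (`OldPath`/`NewPath` are not Hadoop classes):

```java
import java.lang.reflect.Method;

// Hypothetical stand-ins for the pre- and post-HADOOP-16196 shapes of
// org.apache.hadoop.fs.Path (not the real classes).
class OldPath implements Comparable {             // raw Comparable
    public int compareTo(Object o) { return 0; }  // only compareTo(Object) exists
}

class NewPath implements Comparable<NewPath> {    // typed Comparable
    public int compareTo(NewPath o) { return 0; } // declares compareTo(NewPath)
}

public class SignatureCheck {
    // True if the class itself declares compareTo with exactly this
    // parameter type -- the same descriptor the JVM resolves at link time.
    static boolean declaresCompareTo(Class<?> cls, Class<?> param) {
        try {
            Method m = cls.getDeclaredMethod("compareTo", param);
            return m != null;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Code compiled against the new shape calls compareTo(Path);
        // run against an old jar, that descriptor is missing and the JVM
        // throws NoSuchMethodError at the call site.
        System.out.println(declaresCompareTo(OldPath.class, OldPath.class));
        System.out.println(declaresCompareTo(NewPath.class, NewPath.class));
    }
}
```

This is why mixing jars compiled against Hadoop 3.3.x with a 3.1.0 runtime fails only when the changed method is actually invoked, at split generation here.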
> > >
> > > One more thing: TestBeelinePasswordOption passes for me inside the source directory.
> > >
> > > [*INFO*] -------------------------------------------------------
> > > [*INFO*]  T E S T S
> > > [*INFO*] -------------------------------------------------------
> > > [*INFO*] Running org.apache.hive.beeline.*TestBeelinePasswordOption*
> > > [*INFO*] *Tests run: 10*, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.264 s - in org.apache.hive.beeline.*TestBeelinePasswordOption*
> > >
> > > -Ayush
> > >
> > > [1] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > > [2] https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > > [3] https://issues.apache.org/jira/browse/HADOOP-16196
> > >
> > > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <alessandro.solimando@gmail.com> wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > unfortunately my vote is -1 (although non-binding) due to a classpath error which prevents queries involving Tez from completing (all the details are at the end of the email; apologies for the lengthy text, but I wanted to provide all the context).
> > > >
> > > > - verified gpg signature: OK
> > > >
> > > > $ wget https://www.apache.org/dist/hive/KEYS
> > > >
> > > > $ gpg --import KEYS
> > > >
> > > > ...
> > > >
> > > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > >
> > > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > > >
> > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > >
> > > > gpg:                issuer "dkuzmenko@apache.org"
> > > >
> > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > >
> > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > >
> > > > gpg:          There is no indication that the signature belongs to the owner.
> > > >
> > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > >
> > > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > >
> > > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > > >
> > > > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > > >
> > > > gpg:                issuer "dkuzmenko@apache.org"
> > > >
> > > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > > >
> > > > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > > >
> > > > gpg:          There is no indication that the signature belongs to the owner.
> > > >
> > > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> > > >
> > > > (AFAIK, this warning is OK)
> > > >
> > > > - verified package checksum: OK
> > > >
> > > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> > > >
> > > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
> > > >
> > > > - verified maven build (no tests): OK
> > > >
> > > > $ mvn clean install -DskipTests
> > > >
> > > > ...
> > > >
> > > > [INFO] ------------------------------------------------------------------------
> > > > [INFO] BUILD SUCCESS
> > > > [INFO] ------------------------------------------------------------------------
> > > > [INFO] Total time:  04:31 min
> > > >
> > > > - checked release notes: OK
> > > >
> > > > - checked few modules in Nexus: OK
> > > >
> > > > - environment used:
> > > >
> > > > $ sw_vers
> > > >
> > > > ProductName: macOS
> > > >
> > > > ProductVersion: 11.6.8
> > > >
> > > > BuildVersion: 20G730
> > > >
> > > > $ mvn --version
> > > >
> > > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > > >
> > > > Maven home: .../.sdkman/candidates/maven/current
> > > >
> > > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > > >
> > > > Default locale: en_IE, platform encoding: UTF-8
> > > >
> > > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > > >
> > > > $ java -version
> > > >
> > > > openjdk version "1.8.0_292"
> > > >
> > > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > > >
> > > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > > >
> > > >
> > > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box):
> > KO
> > > >
> > > > This is the setup I have used:
> > > >
> > > > $ sw hadoop 3.1.0
> > > >
> > > > $ sw tez 0.10.0 (tried also 0.10.1)
> > > >
> > > > $ sw hive https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > >
> > > > What follows is the test data and query I tried, with the associated stack trace for the error. It seems to be a classpath issue: probably multiple versions of the class end up on the classpath and the classloader happened to load the "wrong one".
> > > >
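[Editor's note] A NoSuchMethodError on org.apache.hadoop.fs.Path.compareTo(Path) usually means an older hadoop-common shadows the one Hive was compiled against. Below is a hedged sketch of how one might hunt for duplicate copies of a class on the lib path; the directory and the fake `.toc` (table-of-contents) files are fabricated so the loop runs anywhere, and with real jars the inner check would instead be `unzip -l "$jar" | grep 'fs/Path.class'`.

```shell
# Hedged sketch (illustrative names): find every archive that bundles
# org/apache/hadoop/fs/Path.class, to spot a duplicate/old copy.
lib="$(mktemp -d)"
# Stand-ins for jar listings; real jars would be inspected with `unzip -l`.
printf 'org/apache/hadoop/fs/Path.class\n' > "$lib/hadoop-common-3.1.0.jar.toc"
printf 'org/apache/hadoop/fs/Path.class\n' > "$lib/old-hadoop-core.jar.toc"
printf 'org/apache/hive/Other.class\n'     > "$lib/hive-exec.jar.toc"
for toc in "$lib"/*.toc; do
  # prints one line per archive that bundles the class
  grep -q 'org/apache/hadoop/fs/Path.class' "$toc" \
    && echo "contains Path: $(basename "$toc" .toc)"
done
```

If more than one archive reports the class, the classloader may pick whichever comes first on the classpath, matching the behavior speculated above.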
> > > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > > >
> > > >
> > > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > > >
> > > >
> > > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > > >
> > > >
> > > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > > >
> > > >
> > > >
> > > > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> > > >
> > > >
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > > >
> > > >
> > > > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a)
> > > > > WHERE t1.b < 3 AND t2.b > 1;
> > > >
> > > >
> > > > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > > > INFO  : Operation QUERY obtained 0 locks
> > > > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > INFO  : Total jobs = 1
> > > > > INFO  : Launching Job 1 out of 1
> > > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > > DEBUG : No local resources to process (other than hive-exec)
> > > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> > > >
> > > >
> > > > > ERROR : Status: Failed
> > > > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]
> > > > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]
> > > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > INFO  : Completed executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 6.983 seconds
> > > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > > Error: Error while compiling statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > > >         at java.lang.Thread.run(Thread.java:748)
> > > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0 (state=08S01,code=2)
> > > >
> > > > >
> > > >
> > > > Best regards,
> > > > Alessandro
> > > >
> > > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com>
> wrote:
> > > >
> > > > > Chris,
> > > > > The KEYS file is at:
> > > > > https://downloads.apache.org/hive/KEYS
> > > > >
> > > > > -Ayush
> > > > >
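[Editor's note] The usual ASF flow with the KEYS file above is to import it and run `gpg --verify` against the detached `.asc` signature published next to each tarball. Below is a self-contained sketch of that same sign/verify workflow using a throwaway key and file (all names fabricated so the commands run anywhere); for the real check, substitute `gpg --import KEYS` and `gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc apache-hive-4.0.0-alpha-2-src.tar.gz`.

```shell
# Hedged sketch: detached-signature creation and verification, end to end,
# in an ephemeral keyring (GnuPG 2.1+ assumed for --quick-generate-key).
export GNUPGHOME="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'demo@example.org' default default never 2>/dev/null
printf 'release payload\n' > "$GNUPGHOME/rel.tar.gz"
# Detached ASCII-armored signature, as published next to ASF artifacts:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --local-user 'demo@example.org' --armor --detach-sign "$GNUPGHOME/rel.tar.gz"
gpg --verify "$GNUPGHOME/rel.tar.gz.asc" "$GNUPGHOME/rel.tar.gz" 2>&1 \
  | grep 'Good signature'
```

With a real release, "Good signature" plus a fingerprint matching one listed in KEYS is what a vote verification is looking for.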
> > > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org>
> > > wrote:
> > > > >
> > > > > > Could someone please point me toward the right KEYS file to
> import
> > so
> > > > > that
> > > > > > I can verify signatures? Thanks!
> > > > > >
> > > > > > I'm seeing numerous test failures due to "Insufficient configured
> > > > > > threads" while trying to start the HTTP server. One example is
> > > > > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > > > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > > > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > > > >
> > > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > > > ... 21 more
> > > > > >
> > > > > > Chris Nauroth
> > > > > >
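[Editor's note] Jetty's ThreadPoolBudget requires the configured pool maximum to strictly exceed the threads the connectors lease, which is why the message above reads "required=4 < max=4": the check `required < max` failed with both at 4. A minimal sketch of that arithmetic (plain shell with illustrative values, not Hive or Jetty code):

```shell
# Hedged sketch of the budget check behind "Insufficient configured threads":
# a lease fails when reserved threads >= pool max, leaving none for requests.
max_threads=4      # e.g. hive.server2.webui.max.threads set by HIVE-24484
leased=4           # e.g. selector/acceptor threads reserved by the connector
if [ "$leased" -ge "$max_threads" ]; then
  echo "Insufficient configured threads: required=$leased < max=$max_threads"
else
  echo "budget OK"
fi
```

Raising the pool maximum above the leased count (e.g. back toward the HiveConf default of 50) makes the same check pass.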
> > > > > >
> > > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org>
> > wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > Thanks for rebuilding this RC, Denys.
> > > > > > >
> > > > > > > Alessandro: IMHO, since no vote has been cast yet and we're talking
> > > > > > > about a build-option change only, it just isn't worth rebuilding
> > > > > > > everything from scratch to create a new RC.
> > > > > > >
> > > > > > > I give a +1 (binding) to this RC: I verified the checksums, the
> > > > > > > binary and source contents, built Hive from source, and tried out
> > > > > > > the artifacts in a mini-cluster environment. I built an HMS DB with
> > > > > > > the schema scripts provided and did table creation, insert, delete,
> > > > > > > and rollback (Iceberg).
> > > > > > >
> > > > > > > Thanks again, Denys, for taking this up.
> > > > > > >
> > > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > > Hi Denys,
> > > > > > > > in other Apache communities, I generally see that votes are
> > > > > > > > cancelled and a new RC is prepared when there are changes or
> > > > > > > > blocking issues like in this case; not sure how things are done
> > > > > > > > in Hive, though.
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > > Alessandro
> > > > > > > >
> > > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <
> > > > dkuzmenko@cloudera.com
> > > > > > > .invalid>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Adam,
> > > > > > > > >
> > > > > > > > > Thanks for pointing that out! Upstream release guide is
> > > outdated.
> > > > > > Once
> > > > > > > I
> > > > > > > > > receive the edit rights, I'll amend the instructions.
> > > > > > > > > Updated the release artifacts and checksums:
> > > > > > > > >
> > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > > here:
> > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > The checksums are these:
> > > > > > > > > -
> > > > b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > -
> > > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > >
> > > > > > > > > Maven artifacts are available
> > > > > > > > > here:
> > > > > > > > >
> > > > > >
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > >
> > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > > source
> > > > > for
> > > > > > > > > this release in github, you can see it at
> > > > > > > > >
> > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > >
> > > > > > > > > The git commit hash
> > > > > > > > > is:
> > > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Please check again.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Denys
> > > > > > > > >
> > > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <
> szita@apache.org
> > >
> > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Denys,
> > > > > > > > > >
> > > > > > > > > > Unfortunately I can't give a +1 on this yet, as the Iceberg
> > > > > > > > > > artifacts are missing from the binary tar.gz. Perhaps the
> > > > > > > > > > -Piceberg flag was missing during the build; can you please
> > > > > > > > > > rebuild?
> > > > > > > > > >
> > > > > > > > > > Thanks,
> > > > > > > > > > Adam
> > > > > > > > > >
> > > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > > Hi team,
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is
> > available
> > > > > > > > > > > here:
> > > > > > >
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > The checksums are these:
> > > > > > > > > > > -
> > > > > > 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > > -
> > > > > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > > >
> > > > > > > > > > > Maven artifacts are available
> > > > > > > > > > > here:
> > > > > > > > > >
> > > > > > >
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > > >
> > > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to
> the
> > > > > source
> > > > > > > for
> > > > > > > > > > > this release in github, you can see it at
> > > > > > > > > > >
> > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > > >
> > > > > > > > > > > The git commit hash
> > > > > > > > > > > is:
> > > > > > > > > >
> > > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > > >
> > > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > > >
> > > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Chris Nauroth <cn...@apache.org>.
+1 (non-binding)

* Verified all checksums.
* Verified all signatures.
* Built from source.
    * mvn clean install -Piceberg -DskipTests
* Tests passed.
    * mvn --fail-never clean verify -Piceberg -Pitests
-Dmaven.test.jvm.args='-Xmx2048m -DJETTY_AVAILABLE_PROCESSORS=4'

I figured out why my test runs were failing in HTTP server initialization.
Jetty enforces thread leasing to warn or abort if there aren't enough
threads available [1]. During startup, it attempts to lease a thread per
NIO selector [2]. By default, the number of NIO selectors to use is
determined based on available CPUs [3]. This is mostly a passthrough to
Runtime.availableProcessors() [4]. In my case, running on a machine with 16
CPUs, this ended up creating more than 4 selectors, therefore requiring
more than 4 threads and violating the lease check. I was able to work
around this by passing the JETTY_AVAILABLE_PROCESSORS system property to
constrain the number of CPUs available to Jetty.

If we are intentionally constraining the pool to 4 threads during itests,
then would it also make sense to limit JETTY_AVAILABLE_PROCESSORS in
maven.test.jvm.args of the root pom.xml, so that others don't run into this
problem later? If so, I'll send a pull request.
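As a rough illustration of the lookup described above, here is a minimal sketch of the "override via JETTY_AVAILABLE_PROCESSORS, fall back to the JVM count" behavior. This is not Jetty's actual ProcessorUtils code [4]; the class and method names below are illustrative only.

```java
// Minimal sketch of the CPU-count lookup described above. NOT Jetty's
// ProcessorUtils; class and method names here are illustrative.
public class AvailableProcessors {
    static final String PROP = "JETTY_AVAILABLE_PROCESSORS";

    static int availableProcessors() {
        // Prefer the override: system property first, then environment variable.
        String override = System.getProperty(PROP, System.getenv(PROP));
        if (override != null) {
            try {
                return Integer.parseInt(override);
            } catch (NumberFormatException ignored) {
                // Fall through to the JVM-reported count on bad input.
            }
        }
        return Runtime.getRuntime().availableProcessors();
    }

    public static void main(String[] args) {
        System.setProperty(PROP, "4");
        System.out.println(availableProcessors()); // prints 4
    }
}
```

With this kind of lookup, setting the property in maven.test.jvm.args caps the selector count no matter how many CPUs the build host has.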

[1]
https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/thread/ThreadPoolBudget.java#L165
[2]
https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L255
[3]
https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-io/src/main/java/org/eclipse/jetty/io/SelectorManager.java#L79
[4]
https://github.com/eclipse/jetty.project/blob/jetty-9.4.40.v20210413/jetty-util/src/main/java/org/eclipse/jetty/util/ProcessorUtils.java#L45

Chris Nauroth


On Thu, Oct 27, 2022 at 1:18 PM Alessandro Solimando <
alessandro.solimando@gmail.com> wrote:

> You are right Ayush, I got sidetracked by the release notes (* [HIVE-19217]
> - Upgrade to Hadoop 3.1.0) and I did not check the versions in the pom
> file, apologies for the false alarm but better safe than sorry.
>
> With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
> including select, join, groupby, orderby, explain (ast, cbo, cbo cost,
> vectorization) are working correctly, against data in ORC and parquet
> file format.
>
> No problem for me either when running TestBeelinePasswordOption locally.
>
> So my vote turns into a +1 (non-binding).
>
> Thanks a lot, Denys, for pushing the release process forward; sorry again
> to you all for the oversight!
>
> Best regards,
> Alessandro
>
> On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com> wrote:
>
> > Hi Alessandro,
> > From this:
> >
> > > $ sw hadoop 3.1.0
> > > $ sw tez 0.10.0 (tried also 0.10.1)
> >
> >
> > I guess you are using the wrong versions. The Hadoop version to be used
> > should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
> >
> > The error also seems to be coming from Hadoop code
> >
> > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >
> >
> > The compareTo method in Hadoop was changed in HADOOP-16196, which isn't
> > in Hadoop 3.1.0; it is present post-3.2.1 [3]
> >
> > One more thing: TestBeelinePasswordOption passes for me inside the source
> > directory.
> >
> > [INFO] -------------------------------------------------------
> >
> > [INFO]  T E S T S
> >
> > [INFO] -------------------------------------------------------
> >
> > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> >
> > [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
> > 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
> >
> > -Ayush
> >
> > [1]
> >
> https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> > [2]
> >
> https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> > [3] https://issues.apache.org/jira/browse/HADOOP-16196
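To make that NoSuchMethodError concrete: assuming, as [3] suggests, that HADOOP-16196 parameterized Path's Comparable implementation, the compiled compareTo descriptor changed, so a caller compiled against the new signature cannot link against an old jar. The toy classes below are illustrative only (OldPath/NewPath are not Hadoop code); they show via reflection that narrowing the Comparable parameter type changes which compareTo overloads a class declares.

```java
import java.lang.reflect.Method;

// Illustrative only: shows why narrowing Comparable<Object> to Comparable<T>
// changes the compiled compareTo signatures. Code compiled against the new
// descriptor fails with NoSuchMethodError when run against the old class.
class OldPath implements Comparable<Object> {
    public int compareTo(Object o) { return 0; }
}

class NewPath implements Comparable<NewPath> {
    public int compareTo(NewPath o) { return 0; }
}

public class SignatureDemo {
    static boolean hasCompareTo(Class<?> cls, Class<?> param) {
        try {
            cls.getDeclaredMethod("compareTo", param);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // OldPath only declares compareTo(Object); NewPath declares
        // compareTo(NewPath) plus a synthetic bridge compareTo(Object).
        System.out.println(hasCompareTo(OldPath.class, OldPath.class)); // prints false
        System.out.println(hasCompareTo(NewPath.class, NewPath.class)); // prints true
    }
}
```

This is why mixing a Hive built against Hadoop 3.3.1 with a Hadoop 3.1.0 runtime produces the link-time failure in the report above.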
> >
> > On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> > alessandro.solimando@gmail.com> wrote:
> >
> > > Hi everyone,
> > >
> > > unfortunately my vote is -1 (although non-binding) due to a classpath
> > > error which prevents queries involving Tez from completing (all the
> > > details are at the end of the email; apologies for the lengthy text,
> > > but I wanted to provide all the context).
> > >
> > > - verified gpg signature: OK
> > >
> > > $ wget https://www.apache.org/dist/hive/KEYS
> > >
> > > $ gpg --import KEYS
> > >
> > > ...
> > >
> > > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > >
> > > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > >
> > > gpg:                using RSA key
> > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > >
> > > gpg:                issuer "dkuzmenko@apache.org"
> > >
> > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > dkuzmenko@apache.org>" [unknown]
> > >
> > > gpg: WARNING: The key's User ID is not certified with a trusted
> > signature!
> > >
> > > gpg:          There is no indication that the signature belongs to the
> > > owner.
> > >
> > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7
> > 3125
> > >
> > > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > >
> > > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > >
> > > gpg:                using RSA key
> > 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > >
> > > gpg:                issuer "dkuzmenko@apache.org"
> > >
> > > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> > > dkuzmenko@apache.org>" [unknown]
> > >
> > > gpg: WARNING: The key's User ID is not certified with a trusted
> > signature!
> > >
> > > gpg:          There is no indication that the signature belongs to the
> > > owner.
> > >
> > > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7
> > 3125
> > >
> > > (AFAIK, this warning is OK)
> > >
> > > - verified package checksum: OK
> > >
> > > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a
> 256
> > > apache-hive-4.0.0-alpha-2-src.tar.gz)
> > >
> > > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a
> 256
> > > apache-hive-4.0.0-alpha-2-bin.tar.gz)
> > >
> > > - verified maven build (no tests): OK
> > >
> > > $ mvn clean install -DskipTests
> > >
> > > ...
> > >
> > > [INFO]
> > >
> ------------------------------------------------------------------------
> > >
> > > [INFO] BUILD SUCCESS
> > >
> > > [INFO]
> > >
> ------------------------------------------------------------------------
> > >
> > > [INFO] Total time:  04:31 min
> > >
> > > - checked release notes: OK
> > >
> > > - checked few modules in Nexus: OK
> > >
> > > - environment used:
> > >
> > > $ sw_vers
> > >
> > > ProductName: macOS
> > >
> > > ProductVersion: 11.6.8
> > >
> > > BuildVersion: 20G730
> > >
> > > $ mvn --version
> > >
> > > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> > >
> > > Maven home: .../.sdkman/candidates/maven/current
> > >
> > > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> > >
> > > Default locale: en_IE, platform encoding: UTF-8
> > >
> > > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> > >
> > > $ java -version
> > >
> > > openjdk version "1.8.0_292"
> > >
> > > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> > >
> > > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> > >
> > >
> > > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box):
> KO
> > >
> > > This is the setup I have used:
> > >
> > > $ sw hadoop 3.1.0
> > >
> > > $ sw tez 0.10.0 (tried also 0.10.1)
> > >
> > > $ sw hive
> > >
> > >
> >
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> > >
> > > What follows is the test data and query I tried, with the associated
> > > stack trace for the error. It seems to be a classpath issue; probably
> > > multiple versions of the class end up on the classpath and the
> > > classloader happened to load the "wrong" one.
> > >
> > > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> > >
> > >
> > > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> > >
> > >
> > > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> > >
> > >
> > > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> > >
> > >
> > >
> > > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> > >
> > >
> > > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> > >
> > >
> > > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a)
> WHERE
> > > > t1.b < 3 AND t2.b > 1;
> > >
> > >
> > > INFO  : Completed compiling
> > > >
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > Time taken:
> > > > 4.171 seconds
> > > > INFO  : Operation QUERY obtained 0 locks
> > > > INFO  : Executing
> > > >
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b):
> > > > SELECT * FROM test_sta
> > > > ts_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > INFO  : Query ID =
> > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > INFO  : Total jobs = 1
> > > > INFO  : Launching Job 1 out of 1
> > > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > > DEBUG : Task getting executed using mapred tag :
> > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=de
> > > > v
> > > > INFO  : Subscribed to counters: [] for queryId:
> > > > dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > > INFO  : Tez session hasn't been created yet. Opening session
> > > > DEBUG : No local resources to process (other than hive-exec)
> > > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1
> > (Stage-1)
> > > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM
> > > > test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2
> > > > .a) WHERE t1.b < 3 AND t2.b > 1"}
> > > > DEBUG : Setting Tez DAG access for
> > > > queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with
> > > > viewAclStr
> > > > ing=dev, modifyStr=dev
> > > > INFO  : Setting tez.task.scale.memory.reserve-fraction to
> > > > 0.30000001192092896
> > > > INFO  : HS2 Host: [alpha2], Query ID:
> > > > [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID:
> > > > [dag_1666888075798_0001_1], DAG Session ID:
> > > [application_1666888075798_0001]
> > > > INFO  : Status: Running (Executing on YARN cluster with App id
> > > > application_1666888075798_0001)
> > >
> > >
> > > > ERROR : Status: Failed
> > > > ERROR : Vertex failed, vertexName=Map 2,
> > > > vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> > > > vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]
> > > > ERROR : Vertex failed, vertexName=Map 1,
> > > > vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex
> > > > vertex_1666888075798_0001_1_00 [Map 1] killed/failed due
> > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]
> > > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > > killedVertices:0
> > > > ERROR : FAILED: Execution Error, return code 2 from
> > > > org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed,
> > vertexName=Map
> > > > 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> > > > vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> > > > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]Vertex failed, vertexName=Map 1,
> > > vertexId=vertex_1666888075798_0001_1_00,
> > > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1]
> > killed/failed
> > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > > killedVertices:0
> > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > INFO  : Completed executing
> > > >
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > > Time taken: 6.983 seconds
> > > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > > Error: Error while compiling statement: FAILED: Execution Error,
> return
> > > > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex
> failed,
> > > > vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01,
> > > > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2]
> > killed/failed
> > > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > > java.lang.NoSuchMethodError:
> > > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > > >         at java.util.TimSort.sort(TimSort.java:220)
> > > >         at java.util.Arrays.sort(Arrays.java:1438)
> > > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > > >         at java.lang.Thread.run(Thread.java:748)
> > > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0 (state=08S01,code=2)
> > >
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >
> > >
> > > Best regards,
> > > Alessandro
> > >
> > > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:
> > >
> > > > Chris,
> > > > The KEYS file is at:
> > > > https://downloads.apache.org/hive/KEYS
> > > >
> > > > -Ayush
> > > >
> > > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org>
> > wrote:
> > > >
> > > > > Could someone please point me toward the right KEYS file to import
> so
> > > > that
> > > > > I can verify signatures? Thanks!
> > > > >
> > > > > I'm seeing numerous test failures due to "Insufficient configured
> > > > threads"
> > > > > while trying to start the HTTP server. One example is
> > > > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed
> that
> > > > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > > >
> > > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > > ... 21 more
> > > > >
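[Editor's note] For anyone reproducing this failure locally: the Jetty thread-budget check above can likely be avoided by raising the web UI pool size back toward the HiveConf default of 50 mentioned earlier in the thread. A minimal hive-site.xml override sketch (the value is illustrative, not a recommendation from this thread):

```xml
<!-- Sketch: enlarge the HiveServer2 web UI thread pool so Jetty's
     ThreadPoolBudget check (required < max) passes at startup. -->
<property>
  <name>hive.server2.webui.max.threads</name>
  <value>50</value>
</property>
```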
> > > > > Chris Nauroth
> > > > >
> > > > >
> > > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org>
> wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Thanks for rebuilding this RC, Denys.
> > > > > >
> > > > > > Alessandro: IMHO, since there was no vote cast yet and we're talking about
> > > > > > a build-option change only, I guess it just isn't worth rebuilding the
> > > > > > whole thing from scratch to create a new RC.
> > > > > >
> > > > > > I give +1 (binding) to this RC. I verified the checksum, binary content,
> > > > > > and source content, built Hive from source, and also tried out the
> > > > > > artifacts in a mini cluster environment. I built an HMS DB with the schema
> > > > > > scripts provided, and did table creation, insert, delete, and rollback (Iceberg).
> > > > > >
> > > > > > Thanks again, Denys for taking this up.
> > > > > >
> > > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > > Hi Denys,
> > > > > > > in other Apache communities I generally see that votes are
> > > cancelled
> > > > > and
> > > > > > a
> > > > > > > new RC is prepared when there are changes or blocking issues
> like
> > > in
> > > > > this
> > > > > > > case, not sure how things are done in Hive though.
> > > > > > >
> > > > > > > Best regards,
> > > > > > > Alessandro
> > > > > > >
> > > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <
> > > dkuzmenko@cloudera.com
> > > > > > .invalid>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi Adam,
> > > > > > > >
> > > > > > > > Thanks for pointing that out! Upstream release guide is
> > outdated.
> > > > > Once
> > > > > > I
> > > > > > > > receive the edit rights, I'll amend the instructions.
> > > > > > > > Updated the release artifacts and checksums:
> > > > > > > >
> > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > here:
> > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > >
> > > > > > > >
> > > > > > > > The checksums are these:
> > > > > > > > -
> > > b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > -
> > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > >
> > > > > > > > Maven artifacts are available
> > > > > > > > here:
> > > > > > > >
> > > > >
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > >
> > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > source
> > > > for
> > > > > > > > this release in github, you can see it at
> > > > > > > >
> https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > >
> > > > > > > > The git commit hash
> > > > > > > > is:
> > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > >
> > > > > > > >
> > > > > > > > Please check again.
> > > > > > > >
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Denys
> > > > > > > >
> > > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <szita@apache.org
> >
> > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Denys,
> > > > > > > > >
> > > > > > > > > Unfortunately I can't give a +1 on this yet, as the Iceberg artifacts
> > > > > > > > > are missing from the binary tar.gz. Perhaps the -Piceberg flag was
> > > > > > > > > missing during the build; can you please rebuild?
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > Adam
> > > > > > > > >
> > > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > > Hi team,
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is
> available
> > > > > > > > > > here:
> > > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > The checksums are these:
> > > > > > > > > > -
> > > > > 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > > -
> > > > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > > >
> > > > > > > > > > Maven artifacts are available
> > > > > > > > > > here:
> > > > > > > > >
> > > > > >
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > > >
> > > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > > > source
> > > > > > for
> > > > > > > > > > this release in github, you can see it at
> > > > > > > > > >
> > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > > >
> > > > > > > > > > The git commit hash
> > > > > > > > > > is:
> > > > > > > > >
> > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > > >
> > > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > > >
> > > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Alessandro Solimando <al...@gmail.com>.
You are right, Ayush; I got sidetracked by the release notes ("* [HIVE-19217]
- Upgrade to Hadoop 3.1.0") and did not check the versions in the pom file.
Apologies for the false alarm, but better safe than sorry.

With the right versions in place (Hadoop 3.3.1 and Tez 0.10.2), tests
including select, join, group by, order by, and explain (ast, cbo, cbo cost,
vectorization) work correctly against data in ORC and Parquet file
formats.
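[Editor's note] The version mismatch discussed in the quoted messages below would have surfaced from the versions pinned in the release pom. A quick sanity check could look like this; the heredoc snippet stands in for the real pom.xml at the root of the unpacked source tarball, and the versions shown are the ones referenced in this thread:

```shell
# Sketch: pull the pinned Hadoop/Tez versions out of a Hive release pom
# before setting up a test environment. The snippet file stands in for
# the real pom.xml from the unpacked source tarball.
cat > /tmp/pom-snippet.xml <<'EOF'
<properties>
  <hadoop.version>3.3.1</hadoop.version>
  <tez.version>0.10.2</tez.version>
</properties>
EOF
# Print the two version properties.
grep -E '<(hadoop|tez)\.version>' /tmp/pom-snippet.xml
```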

No problem for me either when running TestBeelinePasswordOption locally.

So my vote turns into a +1 (non-binding).

Thanks a lot, Denys, for pushing the release process forward; sorry again,
everyone, for the oversight!

Best regards,
Alessandro

On Thu, 27 Oct 2022 at 20:03, Ayush Saxena <ay...@gmail.com> wrote:

> Hi Alessandro,
> From this:
>
> > $ sw hadoop 3.1.0
> > $ sw tez 0.10.0 (tried also 0.10.1)
>
>
> I guess you are using the wrong versions. The Hadoop version to be used
> should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].
>
> The error also seems to be coming from Hadoop code
>
> > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > java.lang.NoSuchMethodError:
> > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>
>
> The compareTo method in Hadoop was changed in HADOOP-16196; that change
> isn't in Hadoop 3.1.0, and is only present from 3.2.1 onwards [3].
>
> On another note, TestBeelinePasswordOption passes for me inside the source
> directory.
>
> [INFO] -------------------------------------------------------
> [INFO]  T E S T S
> [INFO] -------------------------------------------------------
> [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> [INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption
>
> -Ayush
>
> [1]
> https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
> [2]
> https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
> [3] https://issues.apache.org/jira/browse/HADOOP-16196
>
> On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
> alessandro.solimando@gmail.com> wrote:
>
> > Hi everyone,
> >
> > unfortunately my vote is -1 (although non-binding) due to a classpath error
> > which prevents queries involving Tez from completing (all the details are at
> > the end of the email; apologies for the lengthy text, but I wanted to provide
> > all the context).
> >
> > - verified gpg signature: OK
> >
> > $ wget https://www.apache.org/dist/hive/KEYS
> >
> > $ gpg --import KEYS
> >
> > ...
> >
> > $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc apache-hive-4.0.0-alpha-2-bin.tar.gz
> >
> > gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
> > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > gpg:                issuer "dkuzmenko@apache.org"
> > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > gpg:          There is no indication that the signature belongs to the owner.
> > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> >
> > $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc apache-hive-4.0.0-alpha-2-src.tar.gz
> >
> > gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
> > gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
> > gpg:                issuer "dkuzmenko@apache.org"
> > gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <dkuzmenko@apache.org>" [unknown]
> > gpg: WARNING: The key's User ID is not certified with a trusted signature!
> > gpg:          There is no indication that the signature belongs to the owner.
> > Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
> >
> > (AFAIK, this warning is OK)
> >
> > - verified package checksum: OK
> >
> > $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-src.tar.gz)
> >
> > $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256 apache-hive-4.0.0-alpha-2-bin.tar.gz)
> >
> > - verified maven build (no tests): OK
> >
> > $ mvn clean install -DskipTests
> >
> > ...
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> > [INFO] BUILD SUCCESS
> >
> > [INFO]
> > ------------------------------------------------------------------------
> >
> > [INFO] Total time:  04:31 min
> >
> > - checked release notes: OK
> >
> > - checked few modules in Nexus: OK
> >
> > - environment used:
> >
> > $ sw_vers
> >
> > ProductName: macOS
> >
> > ProductVersion: 11.6.8
> >
> > BuildVersion: 20G730
> >
> > $ mvn --version
> >
> > Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
> >
> > Maven home: .../.sdkman/candidates/maven/current
> >
> > Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> > .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
> >
> > Default locale: en_IE, platform encoding: UTF-8
> >
> > OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
> >
> > $ java -version
> >
> > openjdk version "1.8.0_292"
> >
> > OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
> >
> > OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
> >
> >
> > Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
> >
> > This is the setup I have used:
> >
> > $ sw hadoop 3.1.0
> >
> > $ sw tez 0.10.0 (tried also 0.10.1)
> >
> > $ sw hive https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
> >
> > In what follows are the test data and the query I tried, with the
> > associated stack trace for the error. It seems to be a classpath issue:
> > probably multiple versions of the class end up on the classpath, and the
> > classloader happened to load the “wrong one”.
> >
> > CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
> >
> >
> > > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
> >
> >
> > > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
> >
> >
> > > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
> >
> >
> >
> > CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
> >
> >
> > > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
> >
> >
> > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE
> > > t1.b < 3 AND t2.b > 1;
> >
> >
> > > INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> > > INFO  : Operation QUERY obtained 0 locks
> > > INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > INFO  : Total jobs = 1
> > > INFO  : Launching Job 1 out of 1
> > > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > > INFO  : Tez session hasn't been created yet. Opening session
> > > DEBUG : No local resources to process (other than hive-exec)
> > > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> >
> >
> > > ERROR : Status: Failed
> > > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]
> > > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]
> > > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2 killedVertices:0
> > > ERROR : FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > killedVertices:0
> > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > INFO  : Completed executing
> > >
> command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > > Time taken: 6.983 seconds
> > > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > > Error: Error while compiling statement: FAILED: Execution Error, return
> > > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed,
> > > vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01,
> > > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2]
> killed/failed
> > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> > java.lang.NoSuchMethodError:
> > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]Vertex failed, vertexName=Map 1,
> > vertexId=vertex_1666888075798_0001_1_00,
> > > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1]
> killed/failed
> > > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> > java.lang.NoSuchMethodError:
> > > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> > >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> > >         at java.util.TimSort.sort(TimSort.java:220)
> > >         at java.util.Arrays.sort(Arrays.java:1438)
> > >         at
> > >
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> > >         at java.security.AccessController.doPrivileged(Native Method)
> > >         at javax.security.auth.Subject.doAs(Subject.java:422)
> > >         at
> > >
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> > >         at
> > >
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> > >         at
> > >
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> > >         at
> > >
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> > >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > >         at java.lang.Thread.run(Thread.java:748)
> > > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > > killedVertices:0 (state=08S01,code=2)
> >
> >         at
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > >
> >
> > Best regards,
> > Alessandro
> >
> > On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:
> >
> > > Chris,
> > > The KEYS file is at:
> > > https://downloads.apache.org/hive/KEYS
> > >
> > > -Ayush
> > >
> > > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org>
> wrote:
> > >
> > > > Could someone please point me toward the right KEYS file to import so
> > > that
> > > > I can verify signatures? Thanks!
> > > >
> > > > I'm seeing numerous test failures due to "Insufficient configured
> > > threads"
> > > > while trying to start the HTTP server. One example is
> > > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > > >
> > > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> > > > 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed:
> > > > 11.733 s  <<< ERROR!
> > > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException:
> > > > Insufficient configured threads: required=4 < max=4 for
> > > > QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > > Caused by: java.lang.IllegalStateException: Insufficient configured
> > > > threads: required=4 < max=4 for
> > > > QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > > ... 21 more
> > > >
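> > > > If others hit the same failure, a possible workaround (my assumption,
> > > > based on the HIVE-24484 change mentioned above; not verified) would be
> > > > to raise the limit in the test hive-site.xml back toward the HiveConf
> > > > default:

```xml
<!-- Hypothetical edit to data/conf/hive-site.xml: give the web UI
     thread pool more headroom than Jetty's minimum budget requires.
     50 is the HiveConf.java default for hive.server2.webui.max.threads. -->
<property>
  <name>hive.server2.webui.max.threads</name>
  <value>50</value>
</property>
```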
> > > > Chris Nauroth
> > > >
> > > >
> > > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Thanks for rebuilding this RC, Denys.
> > > > >
> > > > > Alessandro: IMHO, since no vote has been cast yet and we're talking about
> > > > > a build option change only, I don't think it's worth rebuilding the
> > > > > whole thing from scratch to create a new RC.
> > > > >
> > > > > I give +1 (binding) to this RC, I verified the checksum, binary
> > > content,
> > > > > source content, built Hive from source and also tried out the
> > artifacts
> > > > in
> > > > > a mini cluster environment. Built an HMS DB with the schema scripts
> > > > > provided, did table creation, insert, delete, rollback (Iceberg).
> > > > >
> > > > > Thanks again, Denys for taking this up.
> > > > >
> > > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > > Hi Denys,
> > > > > > in other Apache communities I generally see that votes are cancelled and a
> > > > > > new RC is prepared when there are changes or blocking issues like in this
> > > > > > case; I'm not sure how things are done in Hive, though.
> > > > > >
> > > > > > Best regards,
> > > > > > Alessandro
> > > > > >
> > > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <
> > dkuzmenko@cloudera.com
> > > > > .invalid>
> > > > > > wrote:
> > > > > >
> > > > > > > Hi Adam,
> > > > > > >
> > > > > > > Thanks for pointing that out! The upstream release guide is outdated.
> > > > > > > Once I receive the edit rights, I'll amend the instructions.
> > > > > > > Updated the release artifacts and checksums:
> > > > > > >
> > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > here:
> > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > >
> > > > > > >
> > > > > > > The checksums are these:
> > > > > > > -
> > b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > -
> > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > >
> > > > > > > Maven artifacts are available
> > > > > > > here:
> > > > > > >
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > >
> > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> source
> > > for
> > > > > > > this release in github, you can see it at
> > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > >
> > > > > > > The git commit hash
> > > > > > > is:
> > > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > >
> > > > > > >
> > > > > > > Please check again.
> > > > > > >
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Denys
> > > > > > >
> > > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org>
> > > wrote:
> > > > > > >
> > > > > > > > Hi Denys,
> > > > > > > >
> > > > > > > > Unfortunately I can't give a +1 on this yet, as the Iceberg artifacts
> > > > > > > > are missing from the binary tar.gz. Perhaps the -Piceberg flag was
> > > > > > > > missing during the build; can you please rebuild?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Adam
> > > > > > > >
> > > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > > Hi team,
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > > here:
> > > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > The checksums are these:
> > > > > > > > > -
> > > > 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > > -
> > > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > > >
> > > > > > > > > Maven artifacts are available
> > > > > > > > > here:
> > > > > > > >
> > > > >
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > > >
> > > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > > source
> > > > > for
> > > > > > > > > this release in github, you can see it at
> > > > > > > > >
> > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > > >
> > > > > > > > > The git commit hash
> > > > > > > > > is:
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > > >
> > > > > > > > > Voting will conclude in 72 hours.
> > > > > > > > >
> > > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Ayush Saxena <ay...@gmail.com>.
Hi Alessandro,
From this:

> $ sw hadoop 3.1.0
> $ sw tez 0.10.0 (tried also 0.10.1)


I guess you are using the wrong versions. The Hadoop version to be used
should be 3.3.1 [1] and the Tez version should be 0.10.2 [2].

The error also seems to be coming from Hadoop code

> vertex=vertex_1666888075798_0001_1_00 [Map 1],
> java.lang.NoSuchMethodError:
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I


The compareTo method in Hadoop was changed in HADOOP-16196; that change
isn't in Hadoop 3.1.0 and is only present from 3.2.1 onward [3].

On another note, TestBeelinePasswordOption passes for me inside the source
directory.

[INFO] -------------------------------------------------------
[INFO]  T E S T S
[INFO] -------------------------------------------------------
[INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed:
18.264 s - in org.apache.hive.beeline.TestBeelinePasswordOption

-Ayush

[1]
https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L136
[2]
https://github.com/apache/hive/blob/release-4.0.0-alpha-2-rc0/pom.xml#L197
[3] https://issues.apache.org/jira/browse/HADOOP-16196
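
A quick way to see which variant is on the classpath is to probe for the
method reflectively. This is a sketch of my own, not release tooling; the
Hadoop class and method names come from the stack trace above, while the
java.lang.String fallback exists only so the demo runs without Hadoop jars:

```java
import java.lang.reflect.Method;

public class MethodProbe {
    // Returns true iff className declares a public method
    // methodName(paramClassName) on the current classpath.
    static boolean hasMethod(String className, String methodName,
                             String paramClassName) {
        try {
            Class<?> cls = Class.forName(className);
            Class<?> param = Class.forName(paramClassName);
            Method m = cls.getMethod(methodName, param);
            return m != null;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On a Hadoop classpath you would probe:
        //   hasMethod("org.apache.hadoop.fs.Path", "compareTo",
        //             "org.apache.hadoop.fs.Path")
        // which should be true only for Hadoop with HADOOP-16196 (>= 3.2.1).
        // Hadoop-free demo against java.lang.String:
        System.out.println(hasMethod("java.lang.String", "compareTo",
                                     "java.lang.String"));   // true
        System.out.println(hasMethod("java.lang.String", "compareTo",
                                     "java.lang.Integer"));  // false
    }
}
```

Running it with the actual hadoop-common jar on the classpath would confirm
whether the Path.compareTo(Path) overload from the error is resolvable.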

On Thu, 27 Oct 2022 at 23:15, Alessandro Solimando <
alessandro.solimando@gmail.com> wrote:

> Hi everyone,
>
> unfortunately my vote is -1 (although non-binding) due to a classpath error
> which prevents queries involving Tez from completing (all the details at the
> end of the email, apologies for the lengthy text but I wanted to provide
> all the context).
>
> - verified gpg signature: OK
>
> $ wget https://www.apache.org/dist/hive/KEYS
>
> $ gpg --import KEYS
>
> ...
>
> $ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
> apache-hive-4.0.0-alpha-2-bin.tar.gz
>
> gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST
>
> gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
>
> gpg:                issuer "dkuzmenko@apache.org"
>
> gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> dkuzmenko@apache.org>" [unknown]
>
> gpg: WARNING: The key's User ID is not certified with a trusted signature!
>
> gpg:          There is no indication that the signature belongs to the
> owner.
>
> Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
>
> $ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
> apache-hive-4.0.0-alpha-2-src.tar.gz
>
> gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST
>
> gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125
>
> gpg:                issuer "dkuzmenko@apache.org"
>
> gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
> dkuzmenko@apache.org>" [unknown]
>
> gpg: WARNING: The key's User ID is not certified with a trusted signature!
>
> gpg:          There is no indication that the signature belongs to the
> owner.
>
> Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125
>
> (AFAIK, this warning is OK)
>
> - verified package checksum: OK
>
> $ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256
> apache-hive-4.0.0-alpha-2-src.tar.gz)
>
> $ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256
> apache-hive-4.0.0-alpha-2-bin.tar.gz)
>
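> The same verification can be sketched in Python; this helper is my own
> illustration rather than part of the release process, and the file name
> and digest to pass in are the ones posted earlier in the thread:

```python
# Sketch: compute a release artifact's SHA-256 the same way
# `shasum -a 256` does, then compare against the published digest.
import hashlib

def sha256_of(path):
    """Stream the file so large tarballs don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_hex):
    """True iff the file's digest matches, case-insensitively."""
    return sha256_of(path) == expected_hex.lower()
```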
> - verified maven build (no tests): OK
>
> $ mvn clean install -DskipTests
>
> ...
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] BUILD SUCCESS
>
> [INFO]
> ------------------------------------------------------------------------
>
> [INFO] Total time:  04:31 min
>
> - checked release notes: OK
>
> - checked few modules in Nexus: OK
>
> - environment used:
>
> $ sw_vers
>
> ProductName: macOS
>
> ProductVersion: 11.6.8
>
> BuildVersion: 20G730
>
> $ mvn --version
>
> Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)
>
> Maven home: .../.sdkman/candidates/maven/current
>
> Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
> .../.sdkman/candidates/java/8.0.292.hs-adpt/jre
>
> Default locale: en_IE, platform encoding: UTF-8
>
> OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"
>
> $ java -version
>
> openjdk version "1.8.0_292"
>
> OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)
>
> OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)
>
>
> Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO
>
> This is the setup I have used:
>
> $ sw hadoop 3.1.0
>
> $ sw tez 0.10.0 (tried also 0.10.1)
>
> $ sw hive
>
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz
>
> What follows is the test data and query I tried, with the associated
> stack trace for the error. It seems to be a classpath issue: probably
> multiple versions of the class end up on the classpath and the classloader
> happened to load the “wrong one”.
>
> CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;
>
>
> > INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> > INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> > INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> > INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> > INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> > INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> > INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> > INSERT INTO test_stats_a (a, b) VALUES (14, NULL);
>
>
> > CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;
>
>
> > INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> > INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> > INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> > INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> > INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> > INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> > INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> > INSERT INTO test_stats_b (a, b) VALUES (14, NULL);
>
>
>
> CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;
>
>
> > INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> > INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> > INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> > INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> > INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> > INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> > INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> > INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);
>
>
> > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1;
>
>
> > INFO  : Completed compiling
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > Time taken: 4.171 seconds
> > INFO  : Operation QUERY obtained 0 locks
> > INFO  : Executing
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b):
> > SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > INFO  : Total jobs = 1
> > INFO  : Launching Job 1 out of 1
> > INFO  : Starting task [Stage-1:MAPRED] in serial mode
> > DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> > INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> > INFO  : Tez session hasn't been created yet. Opening session
> > DEBUG : No local resources to process (other than hive-exec)
> > INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> > DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> > DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> > INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> > INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> > INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)
> >
> > ERROR : Status: Failed
> > ERROR : Vertex failed, vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed, vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]
> > ERROR : Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed, vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError: org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> >         at org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>
>         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]
> > ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > killedVertices:0
> > ERROR : FAILED: Execution Error, return code 2 from
> > org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map
> > 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> > vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> > to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> java.lang.NoSuchMethodError:
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>
>         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>
>         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>
>         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]Vertex failed, vertexName=Map 1,
> vertexId=vertex_1666888075798_0001_1_00,
> > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed
> > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> java.lang.NoSuchMethodError:
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > killedVertices:0
> > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > INFO  : Completed executing
> > command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> > Time taken: 6.983 seconds
> > DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> > test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> > Error: Error while compiling statement: FAILED: Execution Error, return
> > code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed,
> > vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01,
> > diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed
> > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> > vertex=vertex_1666888075798_0001_1_01 [Map 2],
> java.lang.NoSuchMethodError:
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]Vertex failed, vertexName=Map 1,
> vertexId=vertex_1666888075798_0001_1_00,
> > diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed
> > due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> > vertex=vertex_1666888075798_0001_1_00 [Map 1],
> java.lang.NoSuchMethodError:
> > org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
> >         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
> >         at java.util.TimSort.sort(TimSort.java:220)
> >         at java.util.Arrays.sort(Arrays.java:1438)
> >         at
> >
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
> >         at java.security.AccessController.doPrivileged(Native Method)
> >         at javax.security.auth.Subject.doAs(Subject.java:422)
> >         at
> >
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
> >         at
> >
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
> >         at
> >
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
> >         at
> >
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >         at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >         at java.lang.Thread.run(Thread.java:748)
> > ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> > killedVertices:0 (state=08S01,code=2)
>
>         at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >
>
> Best regards,
> Alessandro
>
> On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:
>
> > Chris,
> > The KEYS file is at:
> > https://downloads.apache.org/hive/KEYS
> >
> > -Ayush
> >
> > On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org> wrote:
> >
> > > Could someone please point me toward the right KEYS file to import so
> > that
> > > I can verify signatures? Thanks!
> > >
> > > I'm seeing numerous test failures due to "Insufficient configured
> > threads"
> > > while trying to start the HTTP server. One example is
> > > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> > >
> > > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
> > > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed: 11.733 s  <<< ERROR!
> > > org.apache.hive.service.ServiceException: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > > at org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > at java.lang.reflect.Method.invoke(Method.java:498)
> > > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > > at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > > at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > > at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > > at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > > Caused by: java.lang.IllegalStateException: Insufficient configured threads: required=4 < max=4 for QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0{s=0/1,p=0}]
> > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > > at org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> > > at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > > at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > > at org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > > at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > > at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > > at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > > ... 21 more
> > >
> > > Chris Nauroth
> > >
> > >
> > > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:
> > >
> > > > Hi,
> > > >
> > > > Thanks for rebuilding this RC, Denys.
> > > >
> > > > Alessandro: IMHO, since no vote has been cast yet and we're talking
> > > > about a build option change only, I guess it just isn't worth
> > > > rebuilding the whole thing from scratch to create a new RC.
> > > >
> > > > I give +1 (binding) to this RC. I verified the checksum, binary
> > > > content, and source content, built Hive from source, and also tried
> > > > out the artifacts in a mini cluster environment. I built an HMS DB
> > > > with the schema scripts provided, and did table creation, insert,
> > > > delete, and rollback (Iceberg).
> > > >
> > > > Thanks again, Denys, for taking this up.
> > > >
> > > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > > Hi Denys,
> > > > > in other Apache communities I generally see that votes are
> > > > > cancelled and a new RC is prepared when there are changes or
> > > > > blocking issues like in this case; I'm not sure how things are
> > > > > done in Hive, though.
> > > > >
> > > > > Best regards,
> > > > > Alessandro
> > > > >
> > > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <
> dkuzmenko@cloudera.com
> > > > .invalid>
> > > > > wrote:
> > > > >
> > > > > > Hi Adam,
> > > > > >
> > > > > > Thanks for pointing that out! The upstream release guide is
> > > > > > outdated. Once I receive the edit rights, I'll amend the
> > > > > > instructions.
> > > > > > Updated the release artifacts and checksums:
> > > > > >
> > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > here:
> > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > >
> > > > > >
> > > > > > The checksums are these:
> > > > > > -
> b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > -
> 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > >
> > > > > > Maven artifacts are available
> > > > > > here:
> > > > > >
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > >
> > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source
> > for
> > > > > > this release in github, you can see it at
> > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > >
> > > > > > The git commit hash
> > > > > > is:
> > > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > >
> > > > > >
> > > > > > Please check again.
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > > Denys
> > > > > >
> > > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org>
> > wrote:
> > > > > >
> > > > > > > Hi Denys,
> > > > > > >
> > > > > > > Unfortunately I can't give a +1 on this yet, as the Iceberg
> > > > > > > artifacts are missing from the binary tar.gz. Perhaps the
> > > > > > > -Piceberg flag was missing during the build; can you please
> > > > > > > rebuild?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Adam
> > > > > > >
> > > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > > Hi team,
> > > > > > > >
> > > > > > > >
> > > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > > here:
> > > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > > >
> > > > > > > >
> > > > > > > > The checksums are these:
> > > > > > > > -
> > > 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > > -
> > > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > > >
> > > > > > > > Maven artifacts are available
> > > > > > > > here:
> > > > > > >
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > > >
> > > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> > source
> > > > for
> > > > > > > > this release in github, you can see it at
> > > > > > > >
> https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > > >
> > > > > > > > The git commit hash
> > > > > > > > is:
> > > > > > >
> > > > > >
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > > >
> > > > > > > > Voting will conclude in 72 hours.
> > > > > > > >
> > > > > > > > Hive PMC Members: Please test and vote.
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Alessandro Solimando <al...@gmail.com>.
Hi everyone,

unfortunately my vote is -1 (although non-binding) due to a classpath error
that prevents queries involving Tez from completing (all the details are at
the end of the email; apologies for the lengthy text, but I wanted to
provide all the context).

- verified gpg signature: OK

$ wget https://www.apache.org/dist/hive/KEYS

$ gpg --import KEYS

...

$ gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc
apache-hive-4.0.0-alpha-2-bin.tar.gz

gpg: Signature made Thu 27 Oct 15:11:48 2022 CEST

gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125

gpg:                issuer "dkuzmenko@apache.org"

gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
dkuzmenko@apache.org>" [unknown]

gpg: WARNING: The key's User ID is not certified with a trusted signature!

gpg:          There is no indication that the signature belongs to the
owner.

Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125

$ gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc
apache-hive-4.0.0-alpha-2-src.tar.gz

gpg: Signature made Thu 27 Oct 15:12:08 2022 CEST

gpg:                using RSA key 50606DE1BDBD5CF862A595A907C5682DAFC73125

gpg:                issuer "dkuzmenko@apache.org"

gpg: Good signature from "Denys Kuzmenko (CODE SIGNING KEY) <
dkuzmenko@apache.org>" [unknown]

gpg: WARNING: The key's User ID is not certified with a trusted signature!

gpg:          There is no indication that the signature belongs to the
owner.

Primary key fingerprint: 5060 6DE1 BDBD 5CF8 62A5  95A9 07C5 682D AFC7 3125

(AFAIK, this warning is OK)

- verified package checksum: OK

$ diff <(cat apache-hive-4.0.0-alpha-2-src.tar.gz.sha256) <(shasum -a 256
apache-hive-4.0.0-alpha-2-src.tar.gz)

$ diff <(cat apache-hive-4.0.0-alpha-2-bin.tar.gz.sha256) <(shasum -a 256
apache-hive-4.0.0-alpha-2-bin.tar.gz)

- verified maven build (no tests): OK

$ mvn clean install -DskipTests

...

[INFO]
------------------------------------------------------------------------

[INFO] BUILD SUCCESS

[INFO]
------------------------------------------------------------------------

[INFO] Total time:  04:31 min

- checked release notes: OK

- checked a few modules in Nexus: OK

- environment used:

$ sw_vers

ProductName: macOS

ProductVersion: 11.6.8

BuildVersion: 20G730

$ mvn --version

Apache Maven 3.8.1 (05c21c65bdfed0f71a2f2ada8b84da59348c4c5d)

Maven home: .../.sdkman/candidates/maven/current

Java version: 1.8.0_292, vendor: AdoptOpenJDK, runtime:
.../.sdkman/candidates/java/8.0.292.hs-adpt/jre

Default locale: en_IE, platform encoding: UTF-8

OS name: "mac os x", version: "10.16", arch: "x86_64", family: "mac"

$ java -version

openjdk version "1.8.0_292"

OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_292-b10)

OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.292-b10, mixed mode)


Testing in hive-dev-box (https://github.com/kgyrtkirk/hive-dev-box): KO

This is the setup I have used:

$ sw hadoop 3.1.0

$ sw tez 0.10.0 (also tried 0.10.1)

$ sw hive
https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/apache-hive-4.0.0-alpha-2-bin.tar.gz

In what follows are the test data and the query I tried, with the associated
stack trace for the error. It looks like a classpath issue: probably multiple
versions of the class end up on the classpath, and the classloader happened
to load the "wrong" one.
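To help confirm the classpath suspicion: a quick way to see which copy of a
class the JVM actually resolved is to ask the classloader for the
corresponding .class resource. The following is a hypothetical standalone
helper (WhichJar is not part of Hive, just a diagnostic sketch); the idea is
that `Path.compareTo(Path)` only exists in newer hadoop-common releases, so
an older hadoop-common jar somewhere on the Tez/YARN classpath is a plausible
culprit, but that is my assumption, not something the logs prove.

```java
// Hypothetical diagnostic helper: print the URL of the .class resource a
// classloader resolves for a given class name. If this points at an
// unexpected jar, that jar is likely the source of the NoSuchMethodError.
public class WhichJar {
    static String locate(String className) {
        String resource = className.replace('.', '/') + ".class";
        java.net.URL url = WhichJar.class.getClassLoader().getResource(resource);
        return url == null ? "not on classpath" : url.toString();
    }

    public static void main(String[] args) {
        // Default to the class from the stack trace; any FQCN can be passed.
        String name = args.length > 0 ? args[0] : "org.apache.hadoop.fs.Path";
        System.out.println(name + " -> " + locate(name));
    }
}
```

Running something like `java WhichJar org.apache.hadoop.fs.Path` with the
same classpath as the failing Tez AM would show which jar the incompatible
Path class was loaded from.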

CREATE TABLE test_stats_a (a int, b int) STORED AS ORC;


> INSERT INTO test_stats_a (a, b) VALUES (0, 2);
> INSERT INTO test_stats_a (a, b) VALUES (1, 2);
> INSERT INTO test_stats_a (a, b) VALUES (2, 2);
> INSERT INTO test_stats_a (a, b) VALUES (3, 2);
> INSERT INTO test_stats_a (a, b) VALUES (4, 2);
> INSERT INTO test_stats_a (a, b) VALUES (5, 2);
> INSERT INTO test_stats_a (a, b) VALUES (6, 2);
> INSERT INTO test_stats_a (a, b) VALUES (7, 2);
> INSERT INTO test_stats_a (a, b) VALUES (8, 3);
> INSERT INTO test_stats_a (a, b) VALUES (9, 4);
> INSERT INTO test_stats_a (a, b) VALUES (10, 5);
> INSERT INTO test_stats_a (a, b) VALUES (11, 6);
> INSERT INTO test_stats_a (a, b) VALUES (12, 7);
> INSERT INTO test_stats_a (a, b) VALUES (13, NULL);
> INSERT INTO test_stats_a (a, b) VALUES (14, NULL);


> CREATE TABLE test_stats_b (a int, b int) STORED AS ORC;


> INSERT INTO test_stats_b (a, b) VALUES (0, 2);
> INSERT INTO test_stats_b (a, b) VALUES (1, 2);
> INSERT INTO test_stats_b (a, b) VALUES (2, 2);
> INSERT INTO test_stats_b (a, b) VALUES (3, 2);
> INSERT INTO test_stats_b (a, b) VALUES (4, 2);
> INSERT INTO test_stats_b (a, b) VALUES (5, 2);
> INSERT INTO test_stats_b (a, b) VALUES (6, 2);
> INSERT INTO test_stats_b (a, b) VALUES (7, 2);
> INSERT INTO test_stats_b (a, b) VALUES (8, 3);
> INSERT INTO test_stats_b (a, b) VALUES (9, 4);
> INSERT INTO test_stats_b (a, b) VALUES (10, 5);
> INSERT INTO test_stats_b (a, b) VALUES (11, 6);
> INSERT INTO test_stats_b (a, b) VALUES (12, 7);
> INSERT INTO test_stats_b (a, b) VALUES (13, NULL);
> INSERT INTO test_stats_b (a, b) VALUES (14, NULL);



CREATE TABLE test_stats_c (a string, b int) STORED AS PARQUET;


> INSERT INTO test_stats_c (a, b) VALUES ("a", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("b", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("c", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("d", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("e", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("f", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("g", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("h", 2);
> INSERT INTO test_stats_c (a, b) VALUES ("i", 3);
> INSERT INTO test_stats_c (a, b) VALUES ("j", 4);
> INSERT INTO test_stats_c (a, b) VALUES ("k", 5);
> INSERT INTO test_stats_c (a, b) VALUES ("l", 6);
> INSERT INTO test_stats_c (a, b) VALUES ("m", 7);
> INSERT INTO test_stats_c (a, b) VALUES ("n", NULL);
> INSERT INTO test_stats_c (a, b) VALUES ("o", NULL);


SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE
> t1.b < 3 AND t2.b > 1;


> INFO  : Completed compiling command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b); Time taken: 4.171 seconds
> INFO  : Operation QUERY obtained 0 locks
> INFO  : Executing command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b): SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> INFO  : Query ID = dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> INFO  : Total jobs = 1
> INFO  : Launching Job 1 out of 1
> INFO  : Starting task [Stage-1:MAPRED] in serial mode
> DEBUG : Task getting executed using mapred tag : dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b,userid=dev
> INFO  : Subscribed to counters: [] for queryId: dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b
> INFO  : Tez session hasn't been created yet. Opening session
> DEBUG : No local resources to process (other than hive-exec)
> INFO  : Dag name: SELECT * FROM test_st...... < 3 AND t2.b > 1 (Stage-1)
> DEBUG : DagInfo: {"context":"Hive","description":"SELECT * FROM test_stats_a t1 JOIN test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1"}
> DEBUG : Setting Tez DAG access for queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b with viewAclString=dev, modifyStr=dev
> INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
> INFO  : HS2 Host: [alpha2], Query ID: [dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b], Dag ID: [dag_1666888075798_0001_1], DAG Session ID: [application_1666888075798_0001]
> INFO  : Status: Running (Executing on YARN cluster with App id application_1666888075798_0001)


> ERROR : Status: Failed
> ERROR : Vertex failed, vertexName=Map 2,
> vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I

        at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]
> ERROR : Vertex failed, vertexName=Map 1,
> vertexId=vertex_1666888075798_0001_1_00, diagnostics=[Vertex
> vertex_1666888075798_0001_1_00 [Map 1] killed/failed due
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]
> ERROR : DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> killedVertices:0
> ERROR : FAILED: Execution Error, return code 2 from
> org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map
> 2, vertexId=vertex_1666888075798_0001_1_01, diagnostics=[Vertex
> vertex_1666888075798_0001_1_01 [Map 2] killed/failed due
> to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00,
> diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed
> due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> killedVertices:0
> DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> INFO  : Completed executing
> command(queryId=dev_20221027162822_de90e30b-a80a-427e-869a-b71799222f4b);
> Time taken: 6.983 seconds
> DEBUG : Shutting down query SELECT * FROM test_stats_a t1 JOIN
> test_stats_b t2 ON (t1.a = t2.a) WHERE t1.b < 3 AND t2.b > 1
> Error: Error while compiling statement: FAILED: Execution Error, return
> code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed,
> vertexName=Map 2, vertexId=vertex_1666888075798_0001_1_01,
> diagnostics=[Vertex vertex_1666888075798_0001_1_01 [Map 2] killed/failed
> due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t2 initializer failed,
> vertex=vertex_1666888075798_0001_1_01 [Map 2], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]Vertex failed, vertexName=Map 1, vertexId=vertex_1666888075798_0001_1_00,
> diagnostics=[Vertex vertex_1666888075798_0001_1_00 [Map 1] killed/failed
> due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: t1 initializer failed,
> vertex=vertex_1666888075798_0001_1_00 [Map 1], java.lang.NoSuchMethodError:
> org.apache.hadoop.fs.Path.compareTo(Lorg/apache/hadoop/fs/Path;)I
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:415)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator$InputSplitComparator.compare(HiveSplitGenerator.java:401)
>         at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
>         at java.util.TimSort.sort(TimSort.java:220)
>         at java.util.Arrays.sort(Arrays.java:1438)
>         at
> org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:254)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:281)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:272)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:272)
>         at
> org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:256)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>         at
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
>         at
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>         at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
> ]DAG did not succeed due to VERTEX_FAILURE. failedVertices:2
> killedVertices:0 (state=08S01,code=2)
>

Best regards,
Alessandro
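
(A possible first diagnostic for the failure above, sketched with assumed install paths — a NoSuchMethodError on Path.compareTo(Lorg/apache/hadoop/fs/Path;)I typically means an older hadoop-common jar, whose compareTo still takes Object, is shadowing the expected one on the Tez/Hive classpath:)

```shell
# Hedged sketch: the HIVE_HOME/TEZ_HOME defaults below are assumptions,
# not part of the report above. List every hadoop-common jar that could
# be picked up at runtime and count the copies found.
hits=0
for d in "${HIVE_HOME:-/opt/hive}/lib" "${TEZ_HOME:-/opt/tez}"; do
  for jar in "$d"/hadoop-common-*.jar; do
    [ -e "$jar" ] || continue   # glob matched nothing in this directory
    echo "$jar"
    hits=$((hits + 1))
  done
done
echo "hadoop-common jars found: $hits"
```

If more than one version shows up, removing or aligning the older jar would be the next thing to try.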

On Thu, 27 Oct 2022 at 19:01, Ayush Saxena <ay...@gmail.com> wrote:

> Chris,
> The KEYS file is at:
> https://downloads.apache.org/hive/KEYS
>
> -Ayush
>
> On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org> wrote:
>
> > Could someone please point me toward the right KEYS file to import so
> that
> > I can verify signatures? Thanks!
> >
> > I'm seeing numerous test failures due to "Insufficient configured
> threads"
> > while trying to start the HTTP server. One example is
> > TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> > HIVE-24484 set hive.server2.webui.max.threads to 4 in
> > /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
> >
> > [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> > 11.742 s <<< FAILURE! - in
> > org.apache.hive.beeline.TestBeelinePasswordOption
> > [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed:
> > 11.733 s  <<< ERROR!
> > org.apache.hive.service.ServiceException:
> java.lang.IllegalStateException:
> > Insufficient configured threads: required=4 < max=4 for
> >
> >
> QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
> > {s=0/1,p=0}]
> > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> > at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> > at
> >
> >
> org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> >
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at
> >
> >
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> > at
> >
> >
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > at
> >
> >
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> > at
> >
> >
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> > at
> >
> >
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> > at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> > at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> > at
> >
> >
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> > at
> >
> >
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> > at
> >
> >
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> > at
> >
> >
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> > at
> >
> >
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> > at
> >
> >
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> > at
> org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> > at
> > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> > Caused by: java.lang.IllegalStateException: Insufficient configured
> > threads: required=4 < max=4 for
> >
> >
> QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
> > {s=0/1,p=0}]
> > at
> >
> >
> org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> > at
> >
> >
> org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> > at
> >
> >
> org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> > at
> >
> >
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > at
> >
> >
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> > at
> >
> >
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> > at
> >
> >
> org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> > at
> >
> >
> org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> > at
> >
> org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> > at
> >
> >
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> > at
> >
> >
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> > at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> > at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> > ... 21 more
> >
> > Chris Nauroth
> >
> >
> > On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:
> >
> > > Hi,
> > >
> > > Thanks for rebuilding this RC, Denys.
> > >
> > > Alessandro: IMHO since there was no vote cast yet and we're talking
> about
> > > a build option change only, I guess it just isn't worth rebuilding
> the
> > > whole stuff from scratch to create a new RC.
> > >
> > > I give +1 (binding) to this RC, I verified the checksum, binary
> content,
> > > source content, built Hive from source and also tried out the artifacts
> > in
> > > a mini cluster environment. Built an HMS DB with the schema scripts
> > > provided, did table creation, insert, delete, rollback (Iceberg).
> > >
> > > Thanks again, Denys for taking this up.
> > >
> > > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > > Hi Denys,
> > > > in other Apache communities I generally see that votes are cancelled
> > and
> > > a
> > > > new RC is prepared when there are changes or blocking issues like in
> > this
> > > > case, not sure how things are done in Hive though.
> > > >
> > > > Best regards,
> > > > Alessandro
> > > >
> > > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com
> > > .invalid>
> > > > wrote:
> > > >
> > > > > Hi Adam,
> > > > >
> > > > > Thanks for pointing that out! Upstream release guide is outdated.
> > Once
> > > I
> > > > > receive the edit rights, I'll amend the instructions.
> > > > > Updated the release artifacts and checksums:
> > > > >
> > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > here:
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > >
> > > > >
> > > > > The checksums are these:
> > > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > >
> > > > > Maven artifacts are available
> > > > > here:
> > > > >
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > >
> > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source
> for
> > > > > this release in github, you can see it at
> > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > >
> > > > > The git commit hash
> > > > > is:
> > > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > >
> > > > >
> > > > > Please check again.
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Denys
> > > > >
> > > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org>
> wrote:
> > > > >
> > > > > > Hi Denys,
> > > > > >
> > > > > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > > > > artifacts
> > > > > > are missing from the binary tar.gz. Perhaps -Piceberg flag was
> > > missing
> > > > > > during build, can you please rebuild?
> > > > > >
> > > > > > Thanks,
> > > > > > Adam
> > > > > >
> > > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > > Hi team,
> > > > > > >
> > > > > > >
> > > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > > here:
> > > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > > >
> > > > > > >
> > > > > > > The checksums are these:
> > > > > > > -
> > 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > > -
> > 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > > >
> > > > > > > Maven artifacts are available
> > > > > > > here:
> > > > > >
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > > >
> > > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the
> source
> > > for
> > > > > > > this release in github, you can see it at
> > > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > > >
> > > > > > > The git commit hash
> > > > > > > is:
> > > > > >
> > > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > > >
> > > > > > > Voting will conclude in 72 hours.
> > > > > > >
> > > > > > > Hive PMC Members: Please test and vote.
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Ayush Saxena <ay...@gmail.com>.
Chris,
The KEYS file is at:
https://downloads.apache.org/hive/KEYS

-Ayush
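
(For reference, the verification flow being discussed can be sketched as below; the filenames and SHA-256 value come from the vote email, GNU coreutils is assumed, and the commands assume the tarballs were already downloaded into the current directory:)

```shell
# Signature and checksum verification sketch for the RC artifacts:
#
#   curl -LO https://downloads.apache.org/hive/KEYS
#   gpg --import KEYS
#   gpg --verify apache-hive-4.0.0-alpha-2-src.tar.gz.asc apache-hive-4.0.0-alpha-2-src.tar.gz
#   echo "8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0  apache-hive-4.0.0-alpha-2-src.tar.gz" \
#     | sha256sum -c -
#
# sha256sum -c reads lines of the form "<digest><two spaces><filename>".
# The same mechanics, demonstrated on a scratch file:
printf 'test data\n' > /tmp/rc_demo.txt
sha256sum /tmp/rc_demo.txt | sha256sum -c -
```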

On Thu, 27 Oct 2022 at 21:58, Chris Nauroth <cn...@apache.org> wrote:

> Could someone please point me toward the right KEYS file to import so that
> I can verify signatures? Thanks!
>
> I'm seeing numerous test failures due to "Insufficient configured threads"
> while trying to start the HTTP server. One example is
> TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
> HIVE-24484 set hive.server2.webui.max.threads to 4 in
> /data/conf/hive-site.xml. (The default in HiveConf.java is 50.)
>
> [INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
> 11.742 s <<< FAILURE! - in
> org.apache.hive.beeline.TestBeelinePasswordOption
> [ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed:
> 11.733 s  <<< ERROR!
> org.apache.hive.service.ServiceException: java.lang.IllegalStateException:
> Insufficient configured threads: required=4 < max=4 for
>
> QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
> {s=0/1,p=0}]
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
> at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
> at
>
> org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
>
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at
>
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at
>
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at
>
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> at
>
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> at
>
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> at
>
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> at
>
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> at
>
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> at
>
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
> at
>
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
> at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
> at
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
> Caused by: java.lang.IllegalStateException: Insufficient configured
> threads: required=4 < max=4 for
>
> QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
> {s=0/1,p=0}]
> at
>
> org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
> at
>
> org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
> at
>
> org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
> at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
> at
>
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> at
>
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> at
>
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> at
>
> org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
> at
>
> org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
> at
> org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
> at
>
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> at org.eclipse.jetty.server.Server.doStart(Server.java:401)
> at
>
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
> at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
> ... 21 more
>
> Chris Nauroth
>
>
> On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:
>
> > Hi,
> >
> > Thanks for rebuilding this RC, Denys.
> >
> > Alessandro: IMHO since there was no vote cast yet and we're talking about
> > a build option change only, I guess it just isn't worth rebuilding the
> > whole stuff from scratch to create a new RC.
> >
> > I give +1 (binding) to this RC, I verified the checksum, binary content,
> > source content, built Hive from source and also tried out the artifacts
> in
> > a mini cluster environment. Built an HMS DB with the schema scripts
> > provided, did table creation, insert, delete, rollback (Iceberg).
> >
> > Thanks again, Denys for taking this up.
> >
> > On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > > Hi Denys,
> > > in other Apache communities I generally see that votes are cancelled
> and
> > a
> > > new RC is prepared when there are changes or blocking issues like in
> this
> > > case, not sure how things are done in Hive though.
> > >
> > > Best regards,
> > > Alessandro
> > >
> > > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com
> > .invalid>
> > > wrote:
> > >
> > > > Hi Adam,
> > > >
> > > > Thanks for pointing that out! Upstream release guide is outdated.
> Once
> > I
> > > > receive the edit rights, I'll amend the instructions.
> > > > Updated the release artifacts and checksums:
> > > >
> > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > >
> > > >
> > > > The checksums are these:
> > > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > >
> > > > Maven artifacts are available
> > > > here:
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > >
> > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > > this release in github, you can see it at
> > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > >
> > > > The git commit hash
> > > > is:
> > > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > >
> > > >
> > > > Please check again.
> > > >
> > > >
> > > > Thanks,
> > > > Denys
> > > >
> > > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
> > > >
> > > > > Hi Denys,
> > > > >
> > > > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > > > artifacts
> > > > > are missing from the binary tar.gz. Perhaps -Piceberg flag was
> > missing
> > > > > during build, can you please rebuild?
> > > > >
> > > > > Thanks,
> > > > > Adam
> > > > >
> > > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > > Hi team,
> > > > > >
> > > > > >
> > > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > > here:
> > https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > > >
> > > > > >
> > > > > > The checksums are these:
> > > > > > -
> 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > > -
> 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > > >
> > > > > > Maven artifacts are available
> > > > > > here:
> > > > >
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > > >
> > > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source
> > for
> > > > > > this release in github, you can see it at
> > > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > > >
> > > > > > The git commit hash
> > > > > > is:
> > > > >
> > > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > > >
> > > > > > Voting will conclude in 72 hours.
> > > > > >
> > > > > > Hive PMC Members: Please test and vote.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Chris Nauroth <cn...@apache.org>.
Could someone please point me toward the right KEYS file to import so that
I can verify signatures? Thanks!
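[For reference, a hedged sketch of the usual ASF verification flow. The KEYS download and gpg steps are shown as comments since they need network access and the project's published KEYS file (its exact location is an assumption here). The checksum step is demonstrated on a stand-in file so the snippet is self-contained; for the real artifact, substitute the tarball name and the sha256 value posted in this thread.]

```shell
# Signature check (requires network; KEYS URL is an assumption):
#   curl -LO https://downloads.apache.org/hive/KEYS
#   gpg --import KEYS
#   gpg --verify apache-hive-4.0.0-alpha-2-bin.tar.gz.asc apache-hive-4.0.0-alpha-2-bin.tar.gz

# Checksum check, on a stand-in file so this runs anywhere:
printf 'hello\n' > sample-artifact.tar.gz
expected="5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03"  # sha256 of "hello\n"
actual=$(sha256sum sample-artifact.tar.gz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```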

I'm seeing numerous test failures due to "Insufficient configured threads"
while trying to start the HTTP server. One example is
TestBeelinePasswordOption. Is anyone else seeing this? I noticed that
HIVE-24484 set hive.server2.webui.max.threads to 4 in
/data/conf/hive-site.xml. (The default in HiveConf.java is 50.)

[INFO] Running org.apache.hive.beeline.TestBeelinePasswordOption
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed:
11.742 s <<< FAILURE! - in org.apache.hive.beeline.TestBeelinePasswordOption
[ERROR] org.apache.hive.beeline.TestBeelinePasswordOption  Time elapsed:
11.733 s  <<< ERROR!
org.apache.hive.service.ServiceException: java.lang.IllegalStateException:
Insufficient configured threads: required=4 < max=4 for
QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
{s=0/1,p=0}]
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:733)
at org.apache.hive.jdbc.miniHS2.MiniHS2.start(MiniHS2.java:395)
at
org.apache.hive.beeline.TestBeelinePasswordOption.preTests(TestBeelinePasswordOption.java:60)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at
org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
at
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:377)
at
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:138)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:465)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:451)
Caused by: java.lang.IllegalStateException: Insufficient configured
threads: required=4 < max=4 for
QueuedThreadPool[hiveserver2-web]@628bd77e{STARTED,4<=4<=4,i=4,r=-1,q=0}[ReservedThreadExecutor@cfacf0
{s=0/1,p=0}]
at
org.eclipse.jetty.util.thread.ThreadPoolBudget.check(ThreadPoolBudget.java:165)
at
org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseTo(ThreadPoolBudget.java:141)
at
org.eclipse.jetty.util.thread.ThreadPoolBudget.leaseFrom(ThreadPoolBudget.java:191)
at org.eclipse.jetty.io.SelectorManager.doStart(SelectorManager.java:255)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at
org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
at
org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
at
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:321)
at
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:81)
at
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.server.Server.doStart(Server.java:401)
at
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.apache.hive.http.HttpServer.start(HttpServer.java:335)
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:729)
... 21 more
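[If the failure really does come down to the test-only pool size, one hedged workaround — assuming the mini cluster picks up the data/conf/hive-site.xml mentioned above — would be to raise the Web UI pool back toward the HiveConf default:]

```xml
<!-- data/conf/hive-site.xml (test configuration; path per the HIVE-24484 note above) -->
<property>
  <name>hive.server2.webui.max.threads</name>
  <!-- 50 is the HiveConf.java default; 4 leaves Jetty's ThreadPoolBudget no headroom -->
  <value>50</value>
</property>
```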

Chris Nauroth


On Thu, Oct 27, 2022 at 7:48 AM Ádám Szita <sz...@apache.org> wrote:

> Hi,
>
> Thanks for rebuilding this RC, Denys.
>
> Alessandro: IMHO since there was no vote cast yet and we're talking about
> a build option change only, I guess it just isn't worth rebuilding the
> whole thing from scratch to create a new RC.
>
> I give +1 (binding) to this RC, I verified the checksum, binary content,
> source content, built Hive from source and also tried out the artifacts in
> a mini cluster environment. Built an HMS DB with the schema scripts
> provided, did table creation, insert, delete, rollback (Iceberg).
>
> Thanks again, Denys for taking this up.
>
> On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> > Hi Denys,
> > in other Apache communities I generally see that votes are cancelled and
> a
> > new RC is prepared when there are changes or blocking issues like in this
> > case, not sure how things are done in Hive though.
> >
> > Best regards,
> > Alessandro
> >
> > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com
> .invalid>
> > wrote:
> >
> > > Hi Adam,
> > >
> > > Thanks for pointing that out! Upstream release guide is outdated. Once
> I
> > > receive the edit rights, I'll amend the instructions.
> > > Updated the release artifacts and checksums:
> > >
> > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > >
> > >
> > > The checksums are these:
> > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > >
> > > Maven artifacts are available
> > > here:
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > >
> > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > this release in github, you can see it at
> > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > >
> > > The git commit hash
> > > is:
> > >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > >
> > >
> > > Please check again.
> > >
> > >
> > > Thanks,
> > > Denys
> > >
> > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
> > >
> > > > Hi Denys,
> > > >
> > > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > > artifacts
> > > > are missing from the binary tar.gz. Perhaps -Piceberg flag was
> missing
> > > > during build, can you please rebuild?
> > > >
> > > > Thanks,
> > > > Adam
> > > >
> > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > Hi team,
> > > > >
> > > > >
> > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > here:
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > >
> > > > >
> > > > > The checksums are these:
> > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > >
> > > > > Maven artifacts are available
> > > > > here:
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > >
> > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source
> for
> > > > > this release in github, you can see it at
> > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > >
> > > > > The git commit hash
> > > > > is:
> > > >
> > >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > >
> > > > > Voting will conclude in 72 hours.
> > > > >
> > > > > Hive PMC Members: Please test and vote.
> > > > >
> > > > > Thanks
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Ádám Szita <sz...@apache.org>.
Hi,

Thanks for rebuilding this RC, Denys.

Alessandro: IMHO, since no vote has been cast yet and we're talking about a build option change only, it just isn't worth rebuilding the whole thing from scratch to create a new RC.

I give +1 (binding) to this RC. I verified the checksum, the binary and source contents, built Hive from source, and also tried out the artifacts in a mini cluster environment. I built an HMS DB with the schema scripts provided and exercised table creation, insert, delete, and rollback (Iceberg).

Thanks again, Denys for taking this up.

On 2022/10/27 13:29:36 Alessandro Solimando wrote:
> Hi Denys,
> in other Apache communities I generally see that votes are cancelled and a
> new RC is prepared when there are changes or blocking issues like in this
> case, not sure how things are done in Hive though.
> 
> Best regards,
> Alessandro
> 
> On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dk...@cloudera.com.invalid>
> wrote:
> 
> > Hi Adam,
> >
> > Thanks for pointing that out! Upstream release guide is outdated. Once I
> > receive the edit rights, I'll amend the instructions.
> > Updated the release artifacts and checksums:
> >
> > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> >
> >
> > The checksums are these:
> > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > apache-hive-4.0.0-alpha-2-src.tar.gz
> >
> > Maven artifacts are available
> > here:
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> >
> > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > this release in github, you can see it at
> > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> >
> > The git commit hash
> > is:
> > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> >
> >
> > Please check again.
> >
> >
> > Thanks,
> > Denys
> >
> > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
> >
> > > Hi Denys,
> > >
> > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > artifacts
> > > are missing from the binary tar.gz. Perhaps -Piceberg flag was missing
> > > during build, can you please rebuild?
> > >
> > > Thanks,
> > > Adam
> > >
> > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > Hi team,
> > > >
> > > >
> > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > >
> > > >
> > > > The checksums are these:
> > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > >
> > > > Maven artifacts are available
> > > > here:
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > >
> > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > > this release in github, you can see it at
> > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > >
> > > > The git commit hash
> > > > is:
> > >
> > https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > >
> > > > Voting will conclude in 72 hours.
> > > >
> > > > Hive PMC Members: Please test and vote.
> > > >
> > > > Thanks
> > > >
> > >
> >
> 

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Alessandro Solimando <al...@gmail.com>.
Sorry, I have misread the comment.

If the code hasn't changed and the tag for the RC is still pointing to the
same code, I don't think there is a need for a new RC.

On Thu, 27 Oct 2022 at 15:48, Denys Kuzmenko <dk...@cloudera.com.invalid>
wrote:

> Hi Alessandro,
>
> There were no code changes, just missing artifacts due to an outdated
> release guide (iceberg bits are generated only under iceberg profile).
> Not sure that we should create new RC in that case. Naveen, what
> do you think?
>
>
> On Thu, Oct 27, 2022 at 3:30 PM Alessandro Solimando <
> alessandro.solimando@gmail.com> wrote:
>
> > Hi Denys,
> > in other Apache communities I generally see that votes are cancelled and
> a
> > new RC is prepared when there are changes or blocking issues like in this
> > case, not sure how things are done in Hive though.
> >
> > Best regards,
> > Alessandro
> >
> > On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com
> > .invalid>
> > wrote:
> >
> > > Hi Adam,
> > >
> > > Thanks for pointing that out! Upstream release guide is outdated. Once
> I
> > > receive the edit rights, I'll amend the instructions.
> > > Updated the release artifacts and checksums:
> > >
> > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > >
> > >
> > > The checksums are these:
> > > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > >
> > > Maven artifacts are available
> > > here:
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > >
> > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > this release in github, you can see it at
> > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > >
> > > The git commit hash
> > > is:
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > >
> > >
> > > Please check again.
> > >
> > >
> > > Thanks,
> > > Denys
> > >
> > > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
> > >
> > > > Hi Denys,
> > > >
> > > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > > artifacts
> > > > are missing from the binary tar.gz. Perhaps -Piceberg flag was
> missing
> > > > during build, can you please rebuild?
> > > >
> > > > Thanks,
> > > > Adam
> > > >
> > > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > > Hi team,
> > > > >
> > > > >
> > > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > > here:
> https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > > >
> > > > >
> > > > > The checksums are these:
> > > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > > >
> > > > > Maven artifacts are available
> > > > > here:
> > > >
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > > >
> > > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source
> for
> > > > > this release in github, you can see it at
> > > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > > >
> > > > > The git commit hash
> > > > > is:
> > > >
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > > >
> > > > > Voting will conclude in 72 hours.
> > > > >
> > > > > Hive PMC Members: Please test and vote.
> > > > >
> > > > > Thanks
> > > > >
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Denys Kuzmenko <dk...@cloudera.com.INVALID>.
Hi Alessandro,

There were no code changes, just missing artifacts due to an outdated
release guide (the Iceberg bits are generated only under the iceberg profile).
I'm not sure we should create a new RC in that case. Naveen, what
do you think?


On Thu, Oct 27, 2022 at 3:30 PM Alessandro Solimando <
alessandro.solimando@gmail.com> wrote:

> Hi Denys,
> in other Apache communities I generally see that votes are cancelled and a
> new RC is prepared when there are changes or blocking issues like in this
> case, not sure how things are done in Hive though.
>
> Best regards,
> Alessandro
>
> On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dkuzmenko@cloudera.com
> .invalid>
> wrote:
>
> > Hi Adam,
> >
> > Thanks for pointing that out! Upstream release guide is outdated. Once I
> > receive the edit rights, I'll amend the instructions.
> > Updated the release artifacts and checksums:
> >
> > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> >
> >
> > The checksums are these:
> > - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > apache-hive-4.0.0-alpha-2-src.tar.gz
> >
> > Maven artifacts are available
> > here:
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> >
> > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > this release in github, you can see it at
> > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> >
> > The git commit hash
> > is:
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> >
> >
> > Please check again.
> >
> >
> > Thanks,
> > Denys
> >
> > On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
> >
> > > Hi Denys,
> > >
> > > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> > artifacts
> > > are missing from the binary tar.gz. Perhaps -Piceberg flag was missing
> > > during build, can you please rebuild?
> > >
> > > Thanks,
> > > Adam
> > >
> > > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > > Hi team,
> > > >
> > > >
> > > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > > >
> > > >
> > > > The checksums are these:
> > > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > > >
> > > > Maven artifacts are available
> > > > here:
> > > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > > >
> > > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > > this release in github, you can see it at
> > > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > > >
> > > > The git commit hash
> > > > is:
> > >
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > > >
> > > > Voting will conclude in 72 hours.
> > > >
> > > > Hive PMC Members: Please test and vote.
> > > >
> > > > Thanks
> > > >
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Alessandro Solimando <al...@gmail.com>.
Hi Denys,
in other Apache communities I generally see that votes are cancelled and a
new RC is prepared when there are changes or blocking issues like in this
case, not sure how things are done in Hive though.

Best regards,
Alessandro

On Thu, 27 Oct 2022 at 15:22, Denys Kuzmenko <dk...@cloudera.com.invalid>
wrote:

> Hi Adam,
>
> Thanks for pointing that out! Upstream release guide is outdated. Once I
> receive the edit rights, I'll amend the instructions.
> Updated the release artifacts and checksums:
>
> Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
>
>
> The checksums are these:
> - b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
> apache-hive-4.0.0-alpha-2-bin.tar.gz
> - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> apache-hive-4.0.0-alpha-2-src.tar.gz
>
> Maven artifacts are available
> here:
> https://repository.apache.org/content/repositories/orgapachehive-1117/
>
> The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> this release in github, you can see it at
> https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
>
> The git commit hash
> is:
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
>
>
> Please check again.
>
>
> Thanks,
> Denys
>
> On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:
>
> > Hi Denys,
> >
> > Unfortunately I can't give a plus 1 on this yet, as the Iceberg
> artifacts
> > are missing from the binary tar.gz. Perhaps -Piceberg flag was missing
> > during build, can you please rebuild?
> >
> > Thanks,
> > Adam
> >
> > On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > > Hi team,
> > >
> > >
> > > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> > >
> > >
> > > The checksums are these:
> > > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > > apache-hive-4.0.0-alpha-2-src.tar.gz
> > >
> > > Maven artifacts are available
> > > here:
> > https://repository.apache.org/content/repositories/orgapachehive-1117/
> > >
> > > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > > this release in github, you can see it at
> > > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> > >
> > > The git commit hash
> > > is:
> >
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> > >
> > > Voting will conclude in 72 hours.
> > >
> > > Hive PMC Members: Please test and vote.
> > >
> > > Thanks
> > >
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Denys Kuzmenko <dk...@cloudera.com.INVALID>.
Hi Adam,

Thanks for pointing that out! The upstream release guide is outdated; once I
receive edit rights, I'll amend the instructions.
I've updated the release artifacts and checksums:

Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
here: https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/


The checksums are these:
- b4dbaac5530694f631af13677ffe5443addc148bd94176b27a109a6da67f5e0f
apache-hive-4.0.0-alpha-2-bin.tar.gz
- 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
apache-hive-4.0.0-alpha-2-src.tar.gz

Maven artifacts are available
here: https://repository.apache.org/content/repositories/orgapachehive-1117/

The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
this release in GitHub; you can see it at
https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0

The git commit hash is:
https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c


Please check again.
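[For a quick sanity check that the rebuilt binary tarball now carries the Iceberg bits, a hedged sketch — the lib/ layout and the jar name below are illustrative assumptions, and a stand-in directory tree is created so the snippet is self-contained; against the real release you would instead `tar -xzf apache-hive-4.0.0-alpha-2-bin.tar.gz` and run the find over the extracted tree.]

```shell
# Stand-in for the extracted release directory (hypothetical jar name):
mkdir -p apache-hive-4.0.0-alpha-2-bin/lib
touch apache-hive-4.0.0-alpha-2-bin/lib/hive-iceberg-handler-4.0.0-alpha-2.jar

# The actual check: any iceberg jars under lib/?
count=$(find apache-hive-4.0.0-alpha-2-bin/lib -name '*iceberg*' | wc -l)
if [ "$count" -gt 0 ]; then
  echo "iceberg artifacts present"
else
  echo "iceberg artifacts MISSING"
fi
```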


Thanks,
Denys

On Thu, Oct 27, 2022 at 2:53 PM Ádám Szita <sz...@apache.org> wrote:

> Hi Denys,
>
> Unfortunately I can't give a plus 1 on this yet, as the Iceberg artifacts
> are missing from the binary tar.gz. Perhaps -Piceberg flag was missing
> during build, can you please rebuild?
>
> Thanks,
> Adam
>
> On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> > Hi team,
> >
> >
> > Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> > here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> >
> >
> > The checksums are these:
> > - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> > apache-hive-4.0.0-alpha-2-bin.tar.gz
> > - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> > apache-hive-4.0.0-alpha-2-src.tar.gz
> >
> > Maven artifacts are available
> > here:
> https://repository.apache.org/content/repositories/orgapachehive-1117/
> >
> > The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> > this release in github, you can see it at
> > https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> >
> > The git commit hash
> > is:
> https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> >
> > Voting will conclude in 72 hours.
> >
> > Hive PMC Members: Please test and vote.
> >
> > Thanks
> >
>

Re: [VOTE] Apache Hive 4.0.0-alpha-2 Release Candidate 0

Posted by Ádám Szita <sz...@apache.org>.
Hi Denys,

Unfortunately I can't give a plus 1 on this yet, as the Iceberg artifacts are missing from the binary tar.gz. Perhaps the -Piceberg flag was missing during the build; can you please rebuild?

Thanks,
Adam

On 2022/10/25 11:20:23 Denys Kuzmenko wrote:
> Hi team,
> 
> 
> Apache Hive 4.0.0-alpha-2 Release Candidate 0 is available
> here:https://people.apache.org/~dkuzmenko/release-4.0.0-alpha-2-rc0/
> 
> 
> The checksums are these:
> - 7d4c54ecfe2b04cabc283a84defcc1e8a02eed0e13baba2a2c91ae882b6bfaf7
> apache-hive-4.0.0-alpha-2-bin.tar.gz
> - 8c4639915e9bf649f4a55cd9adb9d266aa15d8fa48ddfadb28ebead2c0aee4d0
> apache-hive-4.0.0-alpha-2-src.tar.gz
> 
> Maven artifacts are available
> here:https://repository.apache.org/content/repositories/orgapachehive-1117/
> 
> The tag release-4.0.0-alpha-2-rc0 has been applied to the source for
> this release in github, you can see it at
> https://github.com/apache/hive/tree/release-4.0.0-alpha-2-rc0
> 
> The git commit hash
> is:https://github.com/apache/hive/commit/da146200e003712e324496bf560a1702485d231c
> 
> Voting will conclude in 72 hours.
> 
> Hive PMC Members: Please test and vote.
> 
> Thanks
>